The present disclosure relates generally to impairment assessment devices and methods for detecting drug impairment, alcohol impairment, impairment due to fatigue, and/or the like, and to metrics used to detect or indicate a state of impairment in a test subject due to use of drugs or alcohol, and more particularly to metrics used in connection with a virtual-reality (“VR”) environment that implements drug and alcohol impairment tests, where the metrics are used to detect or indicate impairment.
Impairment can be brought about by ingesting or otherwise introducing an intoxicating substance, such as alcohol or a drug. Excessive fatigue due to lack of sleep, or certain illnesses, can also cause impairment. Law enforcement officers commonly engage in the detection of a person's impairment, such as during traffic stops or other situations that may arise during the officers' line of duty.
Law enforcement officers currently have access to devices, such as a breathalyzer, which can detect or indicate impairment due to alcohol. However, there is no accepted or ubiquitous device such as the breathalyzer for marijuana and other non-alcoholic drugs. Accordingly, since law enforcement officers do not currently have access to roadside or otherwise portable impairment detectors, decisions regarding impairment typically rely on the subjective judgement of individual officers.
In addition, often a certified Drug Recognition Expert (“DRE”) is required to make a decision on a person's impairment. However, the training, certification, and re-certification required of DREs can be time-consuming and costly.
Thus, there is a need for an easy-to-use impairment assessment device that employs objective and highly repeatable metrics to assist law enforcement officers in gathering drug impairment indicators. As a result, officers and other officials or test administrators will be empowered to make on-site decisions without needing a certified DRE. Moreover, training and recertification costs will be reduced, allowing time and resources to be redirected to other areas of need.
Disclosed herein are impairment detection systems and methods that employ various metrics. The systems and methods suitably create a virtual-reality (“VR”) environment that implements tests from Standard Field Sobriety Tests (“SFSTs”) and other drug and alcohol impairment tests used by police officers in the field. The exemplary metrics are configured to permit such impairment tests to be implemented as closely as possible to guidelines established by police officers and other agents such as drug recognition experts (“DREs”).
More specifically, the impairment tests implemented by the exemplary metrics for evaluation include, but are not limited to, one or a combination of: (a) Horizontal Gaze Nystagmus Test—assesses the ability of a test subject to smoothly track a horizontally moving object and checks for eye stability during the test; (b) Vertical Gaze Nystagmus Test—checks for eye stability as the test subject tracks a vertically moving object; (c) Lack of Convergence Test—checks the ability of the test subject to cross his or her eyes when an object is brought towards the bridge of the subject's nose; (d) Pupil size and response test—measures the subject's pupil size in normal lighting conditions, as well as abnormally dark and bright conditions; and, (e) Modified Romberg Balance Test—tests the subject's ability to follow directions, measure time, and balance.
The exemplary metrics are implemented with these impairment tests in a virtual world through use of a VR headset configured to include eye tracking hardware and software. As each test is conducted, the exemplary eye tracking hardware and software is capable of accurately measuring pupil size, pupil position, and eye gaze direction independently for each eye at a high sample rate.
In order to make determinations of the test subject's level of impairment, the presently disclosed metrics are used to determine various useful values from the eye tracking data collected during each time step of the VR simulation. The eye tracking data, as informed by such metrics, is then output as useful information from which determinations of impairment can be made objectively, repeatably, reliably, and accurately, while eliminating or substantially reducing the subjective nature inherent in previous manual impairment tests performed in the field.
These and other non-limiting characteristics of the disclosure are more particularly disclosed below.
The patent or application file contains at least one drawing executed in color. Copies of this patent or patent application publication with color drawing(s) will be provided by the Office upon request and payment of the necessary fee.
The following is a brief description of the drawings, which are presented for the purposes of illustrating the exemplary embodiments disclosed herein and not for the purposes of limiting the same.
A more complete understanding of the components, processes and apparatuses disclosed herein can be obtained by reference to the accompanying drawings. These figures are merely schematic representations based on convenience and the ease of demonstrating the present disclosure, and are, therefore, not intended to indicate relative size and dimensions of the devices or components thereof and/or to define or limit the scope of the exemplary embodiments.
Although specific terms are used in the following description for the sake of clarity, these terms are intended to refer only to the particular structure of the embodiments selected for illustration in the drawings and are not intended to define or limit the scope of the disclosure. In the drawings and the following description below, it is to be understood that like numeric designations refer to components of like function.
The singular forms “a,” “an,” and “the” include plural referents unless the context clearly dictates otherwise.
As used in the specification and in the claims, the terms “comprise(s),” “include(s),” “having,” “has,” “can,” “contain(s),” and variants thereof, as used herein, are intended to be open-ended transitional phrases, terms, or words that require the presence of the named components/ingredients/steps and permit the presence of other components/ingredients/steps. However, such description should be construed as also describing systems or devices or compositions or processes as “consisting of” and “consisting essentially of” the enumerated components/ingredients/steps, which allows the presence of only the named components/ingredients/steps, along with any unavoidable impurities that might result therefrom, and excludes other components/ingredients/steps.
Numerical values in the specification and claims of this application should be understood to include numerical values which are the same when reduced to the same number of significant figures and numerical values which differ from the stated value by less than the experimental error of conventional measurement technique of the type described in the present application to determine the value.
All ranges disclosed herein are inclusive of the recited endpoint and independently combinable (for example, the range of “from 2 grams to 10 grams” is inclusive of the endpoints, 2 grams and 10 grams, and all the intermediate values).
A value modified by a term or terms, such as “about” and “substantially,” may not be limited to the precise value specified. The modifier “about” should also be considered as disclosing the range defined by the absolute values of the two endpoints. For example, the expression “from about 2 to about 4” also discloses the range “from 2 to 4.” The term “about” may refer to plus or minus 10% of the indicated number.
The following examples are provided to illustrate the methods, processes, systems, apparatuses, and properties of the present disclosure. The examples are merely illustrative and are not intended to limit the disclosure to the materials, conditions, or process parameters set forth therein.
With reference to
As used herein, the VR headset 102 encompasses both virtual reality headsets that provide an immersive experience in which the physical surroundings are not visible when wearing the VR headset 102 and the entire viewed content is the generated artificial visual content (i.e., virtual scene), as well as augmented reality headsets in which the VR headset 102 has transparency allowing for the wearer to see the physical surroundings with the generated virtual scene being superimposed on the physical surroundings.
As shown in
More particularly, eye tracking component software 120 includes computer program code configured to locate, measure, analyze, and extract data from a change in one or more features of a test subject's eyes. The change in one or more features of the test subject's eyes is generally induced by a moving object to be tracked by the test subject's eyes in a virtual scene displayed on the screen 108 of the VR headset 102.
Other changes in the one or more features of the test subject's eyes can be induced, for example, by changing one or more virtual environmental conditions of the virtual scene displayed on the screen 108 of the VR headset 102 (e.g., the brightness of the virtual scene). The local memory 116 stores the instructions 118 to implement the eye tracking software 120, and the instructions 118 are configured to perform at least part of the method illustrated in
The data generated during processing by the eye tracking hardware 106 and software 120 can be stored in non-transitory data memory 132, which is separate or integral with local memory 116. In addition, or alternatively, data generated by the eye tracking hardware 106 and software 120 can be output to the host computer 104 for further processing, via input/output (I/O) device 122.
As illustrated in
In some embodiments, the one or more additional sensor components 110 of the VR headset 102 include but are not limited to cameras 110a (which may be the infrared-sensitive sensors of eye tracking hardware 106 or may be additional cameras), body tracking sensors 110b, infrared (“IR”) sensors 110c, G-sensors 110d, gyroscopes 110e, proximity sensors 110f, and electrodes 110g for obtaining electroencephalogram (EEG) data. The cameras 110a further optionally include a video recording device which records eye movement during testing.
The host computer 104 typically includes a variety of additional hardware components not shown in
The various non-transitory memories, e.g. the local memory 116, the data memory 134, and the main and data memories of the host computer 104, may be variously embodied, for example as an electronic memory (e.g. flash memory or solid state drive, i.e. SSD), a magnetic memory (e.g., a magnetic hard drive), an optical memory (e.g. an optical disk), various combinations thereof, and/or so forth. Moreover, it will be appreciated that the various software may be variously stored in one or any combination of the various memories, and that the disclosed impairment assessment processing may be performed by one or more of the on-board processor 114 of the VR headset 102 and/or the processor of the host computer 104.
The processor and software components of host computer 104 are generally configured to analyze, extract, calculate, and/or correlate information from the raw data generated by the eye tracking hardware 106 and stored in data memory 132 of the VR headset 102. The data memory of the host computer 104 can be separate or integral with the main memory and stores data produced during execution of the instructions by the processor. The data stored in the main and data memories of the host computer 104 can be output (via one or more I/O devices) as impairment indicator information 140. An impairment prediction 142 (i.e., degree and/or probability of impairment), based on the impairment indicator information 140, may also be output via the one or more I/O devices of the host computer 104.
The VR headset 102 is generally communicatively connected with the host computer 104 by a wired or wireless link 144. The wired or wireless link 144 is generally configured to interface with the one or more I/O devices of the host computer 104 and may include the Internet, Bluetooth, USB, HDMI, and/or DisplayPort, for example. Thus, all the data stored in memory 116 which has been generated by the eye tracking hardware 106 of the VR headset 102 can be communicated via wired or wireless link 144 and received by the one or more I/O devices of the host computer 104.
In addition, the VR headset 102 can optionally be configured to run the software components of the host computer 104 mentioned above and described in further detail below. Such a configuration for the VR headset 102 may be desirable if the headset needs to operate in a stand-alone manner without host computer 104, e.g. during a traffic stop, while deployed away from the host computer 104 in the field (e.g., a concert, sporting event, political event, or other type of venue or event), and the like.
The software components of the VR headset 102 or host computer 104 may include code, which when executed by the processor 114 (or host computer 104 processor) causes the corresponding processor to communicate with a user or test administrator via the screen 108 or display device 105 of the host computer 104. For example, once instructed by a user or test administrator, a user interface of the host computer 104 can cause screen 108 of the VR headset 102 (or host display device 105) to display any number of virtual scenes. Each virtual scene generally includes one or more dynamic component(s) configured to generate a change in one or more features of a subject's eye(s). As discussed above, the eye tracking component 106 of the VR headset 102 is configured to locate, measure, analyze, and extract data from the change in one or more eye features which has/have been induced by the virtual scene displayed on the screen 108 by the user interface. In addition, when the host computer 104 includes a separate display device 105, real-time test data can be shown on the display device and include, for example, graphical representations of eye position, graphs, charts, etc.
The software components of the VR headset 102 or host computer 104 may further include a testing component having code, which when executed by the electronic processor 114 (or host computer 104 electronic processor) causes the corresponding processor to store and retrieve information from memory which is necessary to perform various impairment tests, including but not limited to one or more of: lack of convergence (“LOC”), horizontal and vertical gaze nystagmus (“HGN” and “VGN”, respectively), pupil dilation, color sensitivity, and targeting. The type of information typically retrieved with the testing program includes, but is not limited to: predetermined testing parameters/equations for each impairment test; and, the raw data generated by the eye tracking component 106 which can be stored in data memory 132 of the VR headset 102 or in the memory of host computer 104.
Another software component which the VR headset 102 or host computer 104 can optionally include is an impairment testing component having code, which when executed by the processor 114 (or host computer 104 processor) causes the corresponding processor to retrieve user data on the subject undergoing the test. User data can be input through one or more peripheral devices communicatively connected to the VR headset 102 and/or host computer 104. Once the information is retrieved, the testing component inputs the information into the testing parameters/equations to determine output parameter values for each impairment test performed. The parameter values output from the testing component will subsequently be used to determine a test subject's level of impairment and can optionally be stored in data memory 132 of the VR headset 102 or in the memory of host computer 104.
The software components of the VR headset 102 or host computer 104 may further include a processing/comparison software component having code, which when executed by the processor 114 (or host computer 104 processor) causes the corresponding processor to correlate the retrieved testing parameters and associated output values from the testing component with a corresponding baseline standard of impairment/non-impairment and its associated parameter values. More particularly, each of the testing parameters utilized by the testing component are compared with local data containing predetermined or premeasured baseline standards and corresponding parameter values of impairment/non-impairment. If a match is found between the testing parameters and the baseline standards, the associated baseline parameter values, or a representation thereof, is/are extracted. The local data of baseline standards and the correlations made by the processing/comparison component can optionally be stored in data memory 132 of the VR headset 102 or in the memory of host computer 104.
In some configurations, after the processing/comparison component has made correlations, an optional decision software component of the VR headset 102 or host computer 104 is utilized. The decision software component includes code, which when executed by the processor 114 (or host computer 104 processor) causes the corresponding processor to predict a level of impairment (that is, predict a probability and degree of impairment of a test subject), based on the correlated parameter values determined by the processing/comparison component. That is, for any testing parameter and baseline standard being correlated by the processing/comparison component, if the testing parameter output value(s) exceeds one or more thresholds (e.g., value(s) over a period of time, too many high and/or low values, total value too high/too low, etc.) set for the corresponding baseline output value(s), the decision component may output a prediction 142 that the test subject is impaired at an estimated degree.
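By way of non-limiting illustration, the threshold comparison performed by the decision component could be sketched in R as follows; the parameter names, threshold values, and decision rule shown here are hypothetical and illustrate only the general approach, not the specific logic of the disclosed system.

```r
# Hedged sketch: compare one testing parameter's output values against a
# hypothetical baseline and report an impairment flag with a rough degree
# estimate. Parameter names, thresholds, and the decision rule are illustrative.
evaluate_parameter <- function(values, baseline_mean, baseline_sd, k = 2) {
  # flag samples that deviate from the baseline by more than k standard deviations
  exceed <- abs(values - baseline_mean) > k * baseline_sd
  frac_exceeding <- mean(exceed)
  list(
    impaired = frac_exceeding > 0.25,           # illustrative decision threshold
    degree   = round(frac_exceeding * 100, 1)   # percent of samples out of range
  )
}

# example usage with made-up eye jitter values (degrees)
jitter_samples <- c(0.2, 0.4, 1.8, 2.1, 0.3, 2.5, 0.1, 2.2)
evaluate_parameter(jitter_samples, baseline_mean = 0.3, baseline_sd = 0.5)
```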
The impairment prediction 142 of the decision component can optionally be stored in data memory 132 of the VR headset 102 or in the memory of host computer 104. In addition, or alternatively, the impairment prediction 142, or a representation thereof, can be output to the test subject or test administrator via the output component. The output component can output the impairment prediction 142 alone or together with the correlated baseline standards, associated baseline parameter values, testing parameters, and associated testing parameter values.
In other configurations, the decision software component is not utilized such that neither the VR headset 102 nor host computer 104 will make an impairment prediction. In such embodiments, a user or administrator of the VR headset 102 and/or host computer 104 may prefer to make his/her own impairment prediction based on a review of the impairment indication information 140.
In any event, the output component of the VR headset 102 or host computer 104 includes code, which when executed by the processor 114 (or host computer 104 processor) causes the corresponding processor to output one or both of impairment indication information 140 and impairment prediction 142, or a representation thereof. More particularly, information 140 and prediction 142 are output to the user interface, such that the screen 108 of the VR headset 102 and/or display device 105 of the host computer 104 can display the information to the test subject or test administrator. Moreover, the eye data saved for each test subject in information 140 is saved in at least one of the memory components of the VR headset 102 or host computer 104.
Generally, the information 140 is saved in an appropriate format which enables the loading and replaying of test data files for any test subject. If desired, the entire test for a test subject can be replayed using the animated eyes 149 shown on the display device 105 of the host computer 104 as described above. In some particular examples, the information 140 can be saved to memory in the XML file format. In other examples, the information 140 can be saved to memory as a report file written in markdown, such as an R Markdown file. Markdown files like R Markdown are written in plain text format containing chunks of embedded R code configured to output dynamic or interactive documents. The R Markdown file can be knit, a process where each chunk of R code in the file is run and the results of the code are appended to a document next to the code chunk. R Markdown files can also be converted into a new file format, such as HTML, PDF, or Microsoft Word, which preserves the text, code results, and formatting contained in the original R Markdown file.
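By way of non-limiting illustration, an R Markdown report file saved in this manner could be knit programmatically with the rmarkdown package; the file name below is hypothetical.

```r
# Hedged sketch: knitting a saved R Markdown report for a test subject.
# The file name "subject_report.Rmd" is hypothetical.
library(rmarkdown)

# Knitting runs each embedded R code chunk and appends its results to the
# rendered document; the same source can be converted to HTML or Word.
render("subject_report.Rmd", output_format = "html_document")
render("subject_report.Rmd", output_format = "word_document")
```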
The processing/comparison component described above can further include computer program code configured to direct the processor 114 (or processor of the host computer 104) to compare the output values and/or baseline standards with one or more confidence metrics stored in the local data-store. For example, one confidence metric includes historical data of each individual impairment test result which is accessed by the processing/comparison component to assess the confidence of an indication of impairment. Such historical data could further include drug class identification results with probability or percent matches associated with one or more drug classes. Each of the testing parameters utilized by the testing component are compared with these confidence metrics in the local data-store, and if a match is found between the testing parameters/baseline standards and the confidence metrics, the associated confidence metric, or a representation thereof, is/are extracted and are optionally stored in data memory 132 of the VR headset 102 or in the memory of host computer 104.
In addition, or alternatively, the baseline standards, associated baseline parameter values, and associated confidence metrics from the local data-store that have been matched with the testing parameters and associated values output from the testing component are output as part of impairment indication information 140. As illustrated in
Some of the aforementioned testing parameters are directed to the state or status of the system 100 itself. For example, the timestamp 172 testing parameter refers to the time that each set of data originates from, measured in seconds, minutes, hours, etc. The test state 174 refers to an integer representing what part of the test is running at the time the sample is taken. For example, the integer “1” may be a test state integer indicating that a first part of the lack of convergence test (“LOC”) was running at a timestamp of 30 seconds into the test.
Other testing parameters are directed toward information and data that may be useful for the aforementioned pupil size and response test, along with the color sensitivity test. For example, the scene settings 176 refers to various characteristics of the scene displayed on the screen 108 of the VR headset 102, including but not limited to scene brightness and scene colors. The brightness in the scene settings 176 is changed for the pupil response test, and specific colors in the scene settings are changed for the color sensitivity test. For example, in the color sensitivity test, VR headset 102 is configured to observe whether the test subject responds to yellow and/or blue colors. In this regard, yellow/blue color vision loss is rare and thus serves as an indicator of impairment. Left pupil size 178 refers to the size of the test subject's left pupil, measured in millimeters by the eye tracking hardware 106 and software 120. Right pupil size 180 refers to the size of the test subject's right pupil, measured in millimeters by the eye tracking hardware 106 and software 120.
Some of the other testing parameters are directed toward information and data that may be useful for the aforementioned horizontal and vertical gaze nystagmus tests, as well as the lack of convergence test. For example, the eye gaze to target cast distance 182 refers to the distance between the point where the test subject is looking and the object the test subject is supposed to be looking at, measured in meters by the eye tracking hardware 106 and software 120. The eye gaze to target cast distance 182 is calculated separately for each eye, and the estimated overall point of focus with both eyes is calculated with the eye tracking software 120. The eye gaze to target cast vertical angle 184 refers to the angle between the test subject's gaze and a direct line from their eyes to the tracking object, measured in degrees on the vertical plane by the eye tracking hardware 106 and software 120. The eye gaze to target cast vertical angle 184 is also calculated for each eye and the total gaze. The eye gaze to target cast horizontal angle 186 refers to the angle between the test subject's gaze and a direct line from their eyes to the tracking object, measured in degrees on the horizontal plane by the eye tracking hardware 106 and software 120. The eye gaze to target cast horizontal angle 186 is also calculated for each eye and the total gaze. The eye horizontal angle to normal 188 refers to the angle of each eye's gaze relative to the forward direction of the test subject's head, measured in degrees on the horizontal plane by the eye tracking hardware 106 and software 120. The eye vertical angle to normal 190 refers to the angle of each eye's gaze relative to the forward direction of test subject's head, measured in degrees on the vertical plane by the eye tracking hardware 106 and software 120.
The remaining testing parameters mentioned above are related to eye movement in general, which may be useful for all the aforementioned impairment tests. The eye position 194 refers to the X and Y coordinate position of each of the test subject's pupils within the eye socket, measured by the eye tracking hardware 106 and software 120. The eye jitter 196 refers to the angle between each eye's current gaze direction and that eye's gaze direction at the previous sample, measured in degrees by the eye tracking hardware 106 and software 120. Eye position 194 and eye jitter 196 information may be particularly useful for a targeting test which measures the ability to detect the presence of an object that appears in a test subject's field of view and the test subject's ability to focus their gaze on that object. The test subject is instructed to focus their gaze on the target object when detected, and the appropriate eye data is measured and recorded upon detection.
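By way of non-limiting illustration, a single per-timestep record combining the testing parameters described above might be represented as follows; the field names are paraphrased from the parameters above and are not necessarily the exact column names used by the system.

```r
# Hedged sketch of one per-timestep eye tracking record; names and values are illustrative.
sample_record <- data.frame(
  timestamp              = 30.02,  # seconds into the test
  test_state             = 1,      # e.g., first part of the LOC test
  scene_brightness       = 0.8,    # scene settings
  left_pupil_size_mm     = 3.4,
  right_pupil_size_mm    = 3.5,
  gaze_to_target_dist_m  = 0.12,   # eye gaze to target cast distance
  gaze_to_target_h_angle = 1.7,    # degrees, horizontal plane
  gaze_to_target_v_angle = 0.4,    # degrees, vertical plane
  eye_h_angle_to_normal  = 12.3,   # degrees
  eye_v_angle_to_normal  = -2.1,   # degrees
  eye_jitter_deg         = 0.3     # change in gaze direction since the last sample
)
```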
Various impairment tests were performed using a VR headset 102 according to the embodiments described above. That is, a VR headset 102 configured for detecting impairment of a test subject, as discussed above, was used at an alcohol and cannabis “wet lab”, where controlled doses of alcohol and cannabis were administered to one or more volunteer test subjects. During the lab, the test subjects were asked to wear a VR headset 102 configured to act as an impairment sensor. Data was then gathered when the test subjects were sober and subsequently impaired due to alcohol and then cannabis.
One test subject was used to provide representative results for alcohol impairment (hereinafter referred to as “test subject A”), and a different test subject was used to provide representative results for cannabis impairment (hereinafter referred to as “test subject B”). Thus, the test subjects were able to provide sober baseline measurements before consuming alcohol and before consuming cannabis. The various impairments tests were then administered to test subject A at varying blood alcohol content (BAC) levels, such that measurements of alcohol impairment could be obtained. More particularly, the impairment tests were administered at a BAC of 0 (baseline), a BAC of about 0.116, and a BAC of about 0.146.
The impairment tests were next administered to test subject B at various times after smoking cannabis, such that measurements of cannabis impairment could be obtained. More particularly, the impairment tests were administered at post-cannabis smoking times in accordance with Table 1 below:
All the relevant eye and testing data was recorded by sensor software of the VR headset 102 during each test.
Nine (9) tests were administered to the test subjects using the VR headset 102. These nine tests included: (1) an equal pupil test; (2) an HGN test; (3) a pupil rebound test; (4) an HGN45 test; (5) an LOC test; (6) a Modified Romberg test; (7) a pupil size during HGN test; (8) an HGN during HGN45 test; and, (9) a targeting test. During each test, the VR headset 102 tracked both the test subject's eyes and gaze relative to an object. The results of these impairment tests from the two individual test subjects are discussed in greater detail below and are shown by the charts and plots illustrated in
Equal Pupil Test
The equal pupil test was administered to both test subjects A and B to determine differences in pupil size which may be indicative of impairment. The VR headset 102 induced changes in pupil size by exposing both test subjects to a bright light and measuring the change in pupil size. The results of the equal pupil test are shown in
The differences between right and left eye pupil sizes were determined by subtracting left pupil size from right pupil size at each second interval shown on the X-axes of
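By way of non-limiting illustration, this difference calculation and its boxplot-style summary could be sketched in R as follows; the column names and values are hypothetical.

```r
# Hedged sketch: per-sample difference between right and left pupil size. Values are illustrative.
pupil <- data.frame(
  time_s   = 0:5,
  right_mm = c(3.6, 3.5, 2.9, 2.8, 3.0, 3.1),
  left_mm  = c(3.5, 3.5, 3.0, 2.9, 2.9, 3.0)
)

pupil$diff_mm <- pupil$right_mm - pupil$left_mm   # right minus left, in millimeters
summary(pupil$diff_mm)                            # quartiles underlying the boxplot
boxplot(pupil$diff_mm, ylab = "Right - left pupil size (mm)")
```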
The boxplots of
The results of the equal pupil test shown in the charts and plots of
Equal Pupil Test Results
With reference to the boxplot of
In the boxplot of
HGN Tracking Test
The HGN tracking test was administered to both test subjects A and B to determine whether nystagmus occurred in the test subjects' eyes which may be indicative of impairment. The VR headset 102 performed the HGN test by moving an object to the edge of the test subject's vision to induce nystagmus or jitter in the subject's eyes and tracking the response. The results of the HGN test are shown in
Nystagmus was determined from the HGN Tracking test results for each test subject by using the left eye angle to normal variable defined above (the right eye could also be used). The goal of the HGN tracking test is to quantify how smoothly each test subject can track a target displayed by the VR headset 102. Smoother tracking results are assumed to be indicative of less impairment and jittery tracking results are assumed to be indicative of greater impairment.
The raw data obtained during the HGN tracking test for test subject A is provided in the chart of
Next, a line was fit through every 3 points of the smoothed curve. An exemplary line fit through 3 such points of the smoothed curve is shown in the left-side chart of
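By way of non-limiting illustration, the smoothing and 3-point line fitting could be sketched in R as follows; the smoothing parameters are assumptions, and the deviation measure aggregated here (the residual of the middle point of each 3-point window) is one plausible smoothness metric rather than the specific metric used by the disclosed system.

```r
# Hedged sketch: smooth the eye angle-to-normal trace, fit a line through every
# 3 consecutive points of the smoothed curve, and record how far the middle
# point falls from that line as a jitter/smoothness indicator.
set.seed(1)
t   <- seq(0, 10, by = 0.05)
ang <- 20 * sin(0.6 * t) + rnorm(length(t), sd = 0.8)   # synthetic eye angle (degrees)

smoothed <- lowess(t, ang, f = 0.05)                    # simple smoothing step

three_point_residual <- function(x, y) {
  fit <- lm(y ~ x)        # line through the 3-point window
  abs(resid(fit)[2])      # deviation of the middle point from that line
}

n <- length(smoothed$x)
residuals_mid <- sapply(2:(n - 1), function(i) {
  idx <- (i - 1):(i + 1)
  three_point_residual(smoothed$x[idx], smoothed$y[idx])
})

summary(residuals_mid)    # larger values suggest jerkier (less smooth) tracking
```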
The results of the HGN tracking test shown in the charts and plots of
HGN Tracking Test Results
As shown in the boxplot of
As shown in the boxplot of
Pupil Rebound Test
The pupil rebound test measures the reaction to light for both test subjects by examining how their pupils responded to changing light intensities. This was conducted by putting each test subject in a low light condition for a period of time, then quickly shining a bright light into the eyes, thereby causing the pupils to constrict. Only the left pupil size was used for this test, but the right eye could also be used.
As shown in the exemplary chart of
For purposes of concision, only the raw data for test subject B is provided as shown in the charts of
The data obtained from the VR headset during the pupil rebound test is first analyzed by considering only the pupil size data obtained after the bright light is applied. With reference to
The segmented regression process first involves calculating the slope of the line between the point when the bright light is applied and the point when the pupil size levels off. In other words, the pupil size at the first time the brightness reaches level 20 gives one point (time_1, size_1). The time at which the pupil size levels off and the corresponding pupil size at the leveling-off time gives a second point (time_2, size_2). These two points can be seen on the right side of the chart presented in
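By way of non-limiting illustration, this slope calculation can be expressed as the following R sketch; the numerical values are hypothetical.

```r
# Hedged sketch: slope of the constriction segment in the pupil rebound test.
# (time_1, size_1) is the sample when the bright light first reaches level 20;
# (time_2, size_2) is the sample where the pupil size levels off. Values are illustrative.
time_1 <- 12.0; size_1 <- 5.1   # seconds, millimeters
time_2 <- 13.5; size_2 <- 2.9

constriction_slope <- (size_2 - size_1) / (time_2 - time_1)   # mm per second
constriction_slope                                            # more negative = faster constriction
```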
The results of the pupil rebound test in the charts and plots of
Pupil Rebound Test Results
Referring to
HGN45 Test
The HGN45 test, represented in
HGN45 Test Results
In
In
LOC Test
The LOC test, represented by
Thus, the goal of the LOC test is to quantify each test subject's ability or inability to cross their eyes when following the object displayed by the VR headset 102. The angles of the test subjects' eyes, as shown by the Y-axes of the charts in
In order to implement the LOC test on the VR headset 102, an algorithm was developed in two primary stages to quantify LOC considering the location of the tracked object. The first stage of algorithm development for the LOC test was to normalize the raw LOC test data curves by taking the absolute values of the differences between the right and left eye H angles. The second stage was to find windows for when the target object was close to and far from the test subject's eyes. The steps of each algorithm development stage are described in further detail below and are at least partially represented by the charts and plots illustrated in
It is noted that
As mentioned above, the first stage of algorithm development involved normalizing the raw LOC test data curves. Step 1 of the first stage of the algorithm development was to remove irregularities in the data due to blinking of the test subject's eyes. In order to characterize the blinks, the variables for the eye H angles were set to a value of 999.
Step 2 of the algorithm development's first stage was implemented to correct errors potentially introduced by the blink removal procedure of step 1. These errors may arise because the blink removal procedure results in the elimination of some of the raw LOC test data. To correct for such errors in step 2, the R programming language was used to approximate the right and left eye curves so that values could be taken at identical time points. In particular, the R function “approx” was used to approximate the curves.
After completion of steps 1 and 2, the differences between right and left eye H angles to normal were determined in step 3 by subtracting the left eye H angles to normal from the right eye H angles to normal at each second interval shown on the X-axes of
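By way of non-limiting illustration, steps 1 through 3 of this first stage could be sketched in R as follows; the function and column names are hypothetical, although the interpolation step uses the approx function described above.

```r
# Hedged sketch of the first-stage LOC normalization: drop blink samples
# (flagged as 999), resample both eye curves onto a common time grid with
# approx(), then take the absolute right-minus-left horizontal angle difference.
clean <- function(time, angle) {
  keep <- angle != 999                      # remove samples flagged as blinks
  list(time = time[keep], angle = angle[keep])
}

loc_convergence_curve <- function(t_right, right_h, t_left, left_h, n_out = 500) {
  r <- clean(t_right, right_h)
  l <- clean(t_left, left_h)
  grid <- seq(max(min(r$time), min(l$time)),
              min(max(r$time), max(l$time)), length.out = n_out)
  r_i <- approx(r$time, r$angle, xout = grid)$y   # interpolate onto the shared grid
  l_i <- approx(l$time, l$angle, xout = grid)$y
  # under the sign convention assumed here, convergence on a near target
  # increases the absolute difference between the two eyes' horizontal angles
  data.frame(time = grid, abs_diff = abs(r_i - l_i))
}
```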
As briefly discussed above, the second stage of algorithm development was to find windows for when the target object was close to and far from the test subject's eyes. With reference to the right-side Y-axis of
Looking at the overlaying curves illustrated in
Based on the chart of overlaying curves illustrated in
In step 2 of the second stage, the differences between the eye H angles to normal and the vector change positions (i.e., the blue dots in
The results of the LOC test shown in the charts and plots of
LOC Test Results
Referring to
With reference to the boxplots of
Modified Romberg Test
The results of the Modified Romberg test for test subject A are illustrated in
It is noted that, for purposes of the impairment testing examples disclosed herein, the Modified Romberg test was only administered to test subject A. However, the Modified Romberg test could be administered to test subject B if desired. The methodology for implementing and administering the Modified Romberg test with the VR headset for test subject B would be identical to the methodology described below for test subject A.
In order to implement the Modified Romberg test using the VR headset, an algorithm was developed to determine the amount of deviation from initial head position over time. The amount of time at which the subject estimates 30 seconds have passed is also measured. This data is obtained and analyzed to find large deviations from origin which would indicate impairment.
Variables referred to as CameraPositionVectorX, CameraPositionVectorY, and CameraPositionVectorZ were used in the algorithm for the head position coordinates. A variable referred to as EyeOpenState was also used for the start time.
The analysis of the data obtained from the Modified Romberg test begins by finding the test start time (i.e., the first time the EyeOpenState variable has a value of 4 for eyes closed). Next, the test start time is normalized to zero. All coordinates of head position are also normalized so that when the test starts the origin is (x,y,z)=(0,0,0). Then, starting from the origin, the distance of each test subject's head position is calculated over time using the distance equation: distance = √(x² + y² + z²), where (x, y, z) is the head position at a given time relative to the normalized origin.
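By way of non-limiting illustration, this normalization and distance calculation could be sketched in R as follows; the column names follow the variables described above but are otherwise hypothetical.

```r
# Hedged sketch of the Modified Romberg head-sway analysis: normalize the start
# time and head position at the moment the eyes close (EyeOpenState == 4), then
# compute the distance of the head from its starting position at each sample.
romberg_sway <- function(d) {
  start <- which(d$EyeOpenState == 4)[1]        # first eyes-closed sample = test start
  d <- d[start:nrow(d), ]
  d$t <- d$Timestamp - d$Timestamp[1]           # test start time normalized to zero
  x <- d$CameraPositionVectorX - d$CameraPositionVectorX[1]   # origin set to (0,0,0)
  y <- d$CameraPositionVectorY - d$CameraPositionVectorY[1]
  z <- d$CameraPositionVectorZ - d$CameraPositionVectorZ[1]
  d$distance <- sqrt(x^2 + y^2 + z^2)           # deviation from initial head position
  d
}
```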
The blue dotted lines in the plot of
The results of the Modified Romberg test in the plot of
Modified Romberg Test Results
Referring to
Pupil Size During HGN Test
Turning now to
Upon analysis of the charted data in
The second algorithm development step was to find the mean of the smoothed curve in
The third step in developing the algorithm was to use the intersection of the mean line with the curve (the “cross points” marked on the curve of
Next, in the fourth step of developing the algorithm, the left valleys were paired to the right peaks to create multiple peak-valley pairs.
At the fifth step, the peak value was divided by the valley value to obtain the peak-to-valley ratio (only one peak-to-valley ratio for each peak-valley pair). The distribution of these peak-to-valley ratios can be seen in the boxplot illustrated in Figure
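By way of non-limiting illustration, the five steps described above could be sketched in R as follows; the smoothing span and the exact pairing convention are assumptions made for the purposes of the sketch.

```r
# Hedged sketch of the mean-crossing peak/valley pairing for pupil size during the HGN test.
pupil_peak_valley_ratios <- function(time, size) {
  sm <- lowess(time, size, f = 0.1)$y        # step 1: smooth the raw curve
  m  <- mean(sm)                             # step 2: mean of the smoothed curve
  above <- sm > m
  cross <- which(diff(above) != 0)           # step 3: indices just before each mean crossing

  peaks <- numeric(0); valleys <- numeric(0)
  for (k in seq_len(max(length(cross) - 1, 0))) {
    seg <- sm[(cross[k] + 1):cross[k + 1]]   # segment between consecutive crossings
    if (above[cross[k] + 1]) peaks   <- c(peaks,   max(seg))   # segment above the mean
    else                     valleys <- c(valleys, min(seg))   # segment below the mean
  }

  n <- min(length(peaks), length(valleys))   # steps 4 and 5: pair valleys with peaks
  peaks[seq_len(n)] / valleys[seq_len(n)]    # one peak-to-valley ratio per pair
}
```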
The results of the pupil size during HGN test as illustrated in
Pupil Size During HGN Test Results
Referring to the boxplot of
Moreover, it is noted that some problems may arise using the peak and valley detecting algorithm described above. Ideally, it is desirable to modify the data as little as possible. However, the smoothing of the raw HGN tracking test data does require data modification which may result in peaks and/or valleys being missed if the curve does not cross the mean line at the peak/valley points. Thus, the smoothing step can potentially result in an inaccurate peak-to-valley ratio.
A method was thus developed to address the potential weaknesses discussed above. The aforementioned method is illustrated throughout
In order to implement persistent homology here, many lines are used to determine the location of the local maximums and minimums of the curve instead of one line (such as the mean line discussed above).
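By way of non-limiting illustration, the following R sketch computes 0-dimensional sublevel-set persistence pairs for a sampled curve, pairing each local minimum (birth) with the level at which its component merges into an older one (death). This is a generic textbook construction offered for illustration; the exact persistent homology implementation used with the disclosed metrics may differ.

```r
# Hedged sketch: 0-dimensional sublevel-set persistence of a sampled curve y.
# Each row of the result is one point of the birth-death diagram; the component
# born at the global minimum never dies and is omitted.
persistence_pairs <- function(y) {
  n <- length(y)
  comp  <- rep(NA_integer_, n)   # component label of each processed sample
  birth <- numeric(0)            # birth value (local minimum) of each component
  pairs <- data.frame(birth = numeric(0), death = numeric(0))

  find <- function(i) { while (comp[i] != i) i <- comp[i]; i }

  for (i in order(y)) {          # sweep the samples from lowest to highest value
    left  <- if (i > 1 && !is.na(comp[i - 1])) find(i - 1) else NA
    right <- if (i < n && !is.na(comp[i + 1])) find(i + 1) else NA

    if (is.na(left) && is.na(right)) {          # new local minimum: a birth
      comp[i] <- i; birth[i] <- y[i]
    } else if (is.na(left) || is.na(right)) {   # extend an existing component
      comp[i] <- if (is.na(left)) right else left
    } else if (left != right) {                 # local maximum: merge, the younger component dies
      young <- if (birth[left] > birth[right]) left else right
      old   <- if (young == left) right else left
      pairs <- rbind(pairs, data.frame(birth = birth[young], death = y[i]))
      comp[young] <- old; comp[i] <- old
    } else {
      comp[i] <- left
    }
  }
  pairs
}
```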
Thus, the time series curves can now be represented as birth-death diagrams which can be compared using a mathematical metric since individual birth-death diagrams will look different for different curves. Such a comparison is illustrated between the baseline birth-death diagram of
Referring now to
Pupil Size During HGN Test with Persistent Homology Results
With continued reference to
Referring now to
Horizontal Gaze Nystagmus During HGN45 Test
Turning now to
The data necessary for the HGN during HGN45 test was obtained using the right and left eye H angle to normal data from the HGN45 test data files described above. In order to implement HGN during HGN45 test on the VR headset, an algorithm was developed to detect the clusters of spikes. The algorithm development steps included first detecting when a spike occurs and then finding when a cluster of spikes occurs.
In order to detect the occurrence of a spike, irregularities in the data due to blinking of the test subject's eyes were first removed. In order to characterize the blinks for removal, the variables for the eye H angles to normal were set to a value of 999. Then, the R programming language was used to find the spikes. More particularly, the “find_peaks” algorithm from the ggpmisc package in R was used with a span of 71 to find the spikes. The spikes are represented by the blue dots illustrated in
In order to detect clusters of spikes indicative of nystagmus, a cluster was first defined as spikes which occur within a 2 second threshold of each other. However, a different threshold could also be used if desired. The clusters of spikes based on this definition are represented by the purple dots and designated as such, and individual spikes are represented as blue or undesignated dots illustrated in
Next, it was desirable to detect the starting point for clusters of spikes indicative of nystagmus. The angle at which these clusters start gives the onset angle of nystagmus, and the start of the cluster is defined as the first spike in a cluster that occurs more than 10 seconds before the next cluster. However, a threshold value other than 10 seconds could also be used if desired. Cluster starting points based on this definition are represented by the orange dots (and designated as such) illustrated in
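By way of non-limiting illustration, the spike detection and 2-second clustering could be sketched in R as follows; the find_peaks function from the ggpmisc package is used as described above, while the grouping step is a plain-R rendering that may differ in detail from the actual implementation.

```r
# Hedged sketch: detect spikes in the eye H angle to normal and group them into
# clusters whose neighboring spikes occur within a 2 second threshold.
library(ggpmisc)

detect_nystagmus_clusters <- function(time, h_angle, cluster_gap = 2) {
  keep    <- h_angle != 999                        # drop blink samples flagged as 999
  time    <- time[keep]; h_angle <- h_angle[keep]

  spikes  <- which(find_peaks(h_angle, span = 71)) # candidate spike locations
  spike_t <- time[spikes]
  if (length(spike_t) == 0) return(NULL)           # no spikes detected

  cluster_id <- cumsum(c(1L, diff(spike_t) > cluster_gap))  # new cluster when gap > threshold
  data.frame(time = spike_t, angle = h_angle[spikes], cluster = cluster_id)
}

# the first spike of each cluster approximates the angle of onset of nystagmus, e.g.:
# onset <- aggregate(angle ~ cluster, detect_nystagmus_clusters(t, ang), FUN = function(a) a[1])
```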
Next, the differences between the values represented by the orange dots (i.e., the starting points of clusters and the angles of onset of nystagmus) were found for all occurrences. Distributions for these differences were then analyzed to draw conclusions from the HGN during HGN45 test data obtained from test subject A. The distribution results for test subject A are illustrated in the boxplot of
Moreover, this methodology was also applied to the HGN tracking test results obtained from test subject B as shown in
The results of the HGN during HGN45 test shown in the charts and plots of
Results of Horizontal Gaze Nystagmus During HGN45 Test
With reference to the boxplot of
With reference to the boxplot of
With reference to the boxplots of
Targeting Test
Referring now to
It is noted that, for purposes of the impairment testing examples disclosed herein, the targeting test was only administered to test subject B. However, the targeting test could be administered to test subject A if desired. The methodology for implementing and administering the targeting test with the VR headset for test subject A would be identical to the methodology described below for test subject B.
In order to implement the targeting test on the VR headset 102, an algorithm was developed to determine the time it took for the subject to identify the target in their field of vision and the time it took for the subject to accurately track the target. With reference to the baseline results illustrated in
The steps of the developed algorithm included first removing blinks, which were characterized by setting the gaze to target cast H angle to a value of 999 degrees. Next, the gaze to target cast H angle data was normalized so that all values were positive. Then, the time when the target appeared in the test subject's field of view was found. Since the x value of the target had unique discrete values throughout the test, the time that the target appeared is the time when these discrete values changed.
Next, the first time when the gaze to target cast H angle was within 4 degrees (i.e., the reaction time, represented by the blue dots in
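By way of non-limiting illustration, the reaction time portion of this algorithm could be sketched in R as follows; the column names (time, target_x, gaze_to_target_h) are hypothetical stand-ins for the logged values described above.

```r
# Hedged sketch of the targeting test reaction time: find when the target
# appears (its discrete x value changes) and the first subsequent time the
# gaze to target cast H angle falls within the 4 degree threshold.
targeting_reaction_times <- function(d, threshold_deg = 4) {
  d <- d[d$gaze_to_target_h != 999, ]               # remove blink samples
  d$gaze_to_target_h <- abs(d$gaze_to_target_h)     # normalize so all values are positive

  appear <- d$time[which(diff(d$target_x) != 0) + 1]  # times the target appears/moves

  sapply(appear, function(t0) {
    after <- d[d$time >= t0, ]
    hit   <- which(after$gaze_to_target_h <= threshold_deg)[1]   # first sample within 4 degrees
    if (is.na(hit)) NA else after$time[hit] - t0                 # reaction time in seconds
  })
}
```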
The results of the targeting test as illustrated in
Targeting Test Results
With reference to the boxplot of
With reference to the boxplot of
Some further embodiments are described below.
Some embodiments comprise a set of metrics as shown in
Some embodiments comprise a set of metrics as shown in
Some embodiments comprise a set of metrics as shown in
Some embodiments comprise a set of metrics as shown in
Some embodiments comprise a set of metrics as shown in
Some embodiments comprise a set of metrics as shown in
Some embodiments comprise a set of metrics as shown in
Some embodiments comprise a set of metrics as shown in
Some embodiments comprise a set of metrics as shown in
It will be appreciated that variants of the above-disclosed and other features and functions, or alternatives thereof, may be combined into many other different systems or applications. Various presently unforeseen or unanticipated alternatives, modifications, variations or improvements therein may be subsequently made by those skilled in the art which are also intended to be encompassed by the following claims.
To aid the Patent Office and any readers of this application and any resulting patent in interpreting the claims appended hereto, applicants do not intend any of the appended claims or claim elements to invoke 35 U.S.C. § 112(f) unless the words “means for” or “step for” are explicitly used in the particular claim.
This application claims the benefit of U.S. Provisional Application Ser. No. 63/041,170, filed Jun. 19, 2020, and titled METRICS FOR IMPAIRMENT DETECTING DEVICE, which is incorporated herein by reference in its entirety.