This disclosure relates generally to the field of human machine interfaces, and more specifically to the field of human machine interface design, safety, and testing. Described herein are systems and methods for testing and analyzing human machine interfaces.
Testing human machine interfaces and user interfaces is a complex problem requiring real world scenarios that prompt real world human interactions with the interfaces.
The problem with commercially available systems is that they are complicated to install (e.g., require expert installation), have security concerns (when transmitting data over the internet is not preferable), are expensive (e.g., full-scale simulators cost thousands of dollars), are not easily shippable or are otherwise not mobile, and are poor predictors of how actual user interfaces will perform when real users are using them. Further, the data derived from currently available systems does not readily provide the ability to annotate the data or analyze the data in a collaborative environment, often because current systems store the data in individual files which are difficult to collectively share, analyze, and annotate.
Accordingly, there exists a need for new systems and methods for testing and evaluating human machine and user interfaces.
In some aspects, the techniques described herein relate to a computer-implemented method of evaluating a control interface based on user interaction, including: displaying, on a display, an actual environment user interface (UI) to a user, wherein the actual environment UI is a test UI of a vehicle; receiving, from one or more sensors, signals indicative of a first user interaction with the displayed actual environment UI; calculating a plurality of parameters of the first user interaction with the actual environment UI based on the received signals indicative of the first user interaction, wherein the plurality of parameters include one or more of: a total eyes off road time metric, a task completion time, a subtask completion time, or a performance score; and outputting an indication for the actual environment UI.
In some aspects, the techniques described herein relate to a computer-implemented method of evaluating control interfaces based on user interactions in simulated and actual environments, including: displaying, on a display, a simulated environment user interface (UI) to a user, wherein the simulated environment UI represents a test UI of a vehicle; receiving, from one or more sensors, signals indicative of a first user interaction with the displayed simulated environment UI; calculating a first plurality of parameters of the simulated environment UI based on the received signals indicative of the first user interaction, wherein the first plurality of parameters include one or more of: a total eyes off road time metric, a task completion time, a subtask completion time, or a predictive score; displaying, on the display or a second display, an actual environment UI to the user, wherein the displayed actual environment UI corresponds to the displayed simulated environment UI, and wherein the actual environment UI is the test UI of the vehicle; receiving, from one or more sensors, signals indicative of a second user interaction with the displayed actual environment UI; calculating a second plurality of parameters of the actual environment UI based on the received signals indicative of the second user interaction, wherein the second plurality of parameters include one or more of: a total eyes off road time metric, a task completion time, a subtask completion time, or a predictive score; comparing the first plurality of parameters to the second plurality of parameters; and outputting an indication for one or both of the simulated or actual environment UI.
In some aspects, the techniques described herein relate to a computer-implemented method of simulating user interaction with control interfaces, including: displaying, on a display, a simulated environment user interface (UI) to a user, wherein the simulated environment UI represents a test UI of a vehicle; receiving, from one or more sensors, signals indicative of a first user interaction with the displayed simulated environment UI; calculating a plurality of parameters of the simulated environment UI based on the received signals indicative of the first user interaction, wherein the plurality of parameters includes one or more of: a total eyes off UI time metric, a task completion time, a subtask completion time, or a performance score; and outputting an indication for the test UI.
The foregoing is a summary, and thus, necessarily limited in detail. The above-mentioned aspects, as well as other aspects, features, and advantages of the present technology are described below in connection with various embodiments, with reference made to the accompanying drawings.
The illustrated embodiments are merely examples and are not intended to limit the disclosure. The schematics are drawn to illustrate features and concepts and are not necessarily drawn to scale.
The above-mentioned aspects, as well as other aspects, features, and advantages of the present technology will now be described in connection with various embodiments. The inclusion of the following embodiments is not intended to limit the disclosure to these embodiments, but rather to enable any person skilled in the art to make and use the contemplated invention(s). Other embodiments may be utilized, and modifications may be made without departing from the spirit or scope of the subject matter presented herein. Aspects of the disclosure, as described and illustrated herein, can be arranged, combined, modified, and designed in a variety of different formulations, all of which are explicitly contemplated and form part of this disclosure.
In general, the systems and methods described herein provide a systematic, repeatable, quantitative way to measure user interaction to understand the performance of complex systems with the user as a component, and how the user, the system, and the interaction of the two affect overall performance. This system can be used by product teams to improve their products' performance and safety, and processing the data using the software provides annotated data that can then be further processed into algorithms and/or fed into artificial intelligence systems.
As used herein, human machine interface (HMI), user interface (UI), and graphical user interface (GUI) may be used interchangeably in that the various systems, methods, and devices described herein may be applied to analyze HMIs, UIs, and GUIs.
In general, a user of an HMI/UI may include a technician, design engineer, user interface designer, user experience designer, a front-end developer, visual designer, quality control engineer, interaction designer, user researcher, product designer, a member of the general population, etc. Identifying information of the user may be hidden or de-identified in any of the sensor data collected.
In general, the systems described herein are configured to capture user interaction data, including touch, voice, movement, visual attention, and facial expression, paired with system and environment or contextual data. In some embodiments, user interaction data comprising movement data may also include measuring multitasking activities of the user, for example phone use. These data are then ported to analysis software for replay, analysis, annotation, and processing to be used in supporting engineering, testing, and/or safety analytics. Such data may be prepared for post-processing to build machine learning models for autonomy, application specific integrated circuits, and software.
In general, the systems and methods described herein may be used with any system that employs HMIs/UIs. Exemplary, non-limiting examples of such systems include: command and control systems, battle management systems, ground station systems, Supervisory Control and Data Acquisition (SCADA) systems, industrial systems, manned or unmanned spacecraft systems, airplane systems, radar systems, sonar systems, air traffic control systems, car systems, Vertical Takeoff and Landing (VTOL) aircraft systems, tank systems, armored vehicle systems, remotely operated submersible systems, remotely operated rover systems for space exploration and bomb disposal (EOD), all manner of HMI on ship and submarine systems, missile warning and defense ground station systems, orbit servicing and assembly ground stations systems and control room systems, multi-user operations with unmanned systems, multi-user operations with combinations of vehicles and command and control in complex operations systems, agricultural vehicle systems, mining vehicle systems, etc. Additionally, or alternatively, the systems and methods described herein may be used in modeling, analyzing, and/or testing of cyber security operations, either with a single operator, in a cyber security operations center (SOC), or networked across multiple users or SOCs in different locations.
Further, since some of the human machine interfaces (HMIs) that may be tested may be pre-market and/or test HMIs, any of the user interfaces (UIs) or HMIs described herein may be configured such that they prevent imaging of the UI or HMI by the user, or at least record when a user has taken images or recordings of the UI or HMI. Further, for example, there might be an obscure or imperceptible phrase or time stamp that will show up in any pictures taken of the interface. Still further, for example, there may be imperceptible variations of each UI so that if a user 'leaks' a design, the source of the leak may be determined.
As identified above, the problem with commercially available systems is their complex installation, security, expense, mobility, and predictive capacity. The technical solution to such technical problems is to provide data-driven devices and/or software to assist in automotive design, in particular UI and HMI design. This disclosure describes data collection devices and methods, data analysis and practical applications of the data analysis, and a machine learning approach.
Data collection in the systems described herein is accomplished through in-vehicle sensors recording the user interactions with an actual vehicle or a user interface of the vehicle (actual environment), and in-home device sensors recording the user interactions with a simulated vehicle or a user interface of the vehicle (simulated environment).
The simulated environments and actual environments described herein are configured to create situations in which the user should behave substantially consistently (e.g., using the same methods to perform a task or subtask). For example, the systems described herein may be configured such that paired data may be measured and/or collected from an individual driving an actual vehicle and using the interface of that vehicle and the same individual in-home, performing a driving proxy task in the same way while using the recorded interface of that vehicle. Ideally, paired data will be collected over time for each individual in a variety of vehicles and a variety of driving contexts in either or both simulated and actual environments. Additionally, or alternatively, there may be data pairing between a UI design and an updated UI design (e.g., based on software revisions). The data collected by the sensors may represent user interaction with the UI design and then user interaction with the updated UI design, the user interaction with the UI design and the updated UI design being paired. Additionally, or alternatively, there may be data pairing between vehicle UIs. In such embodiments, the processor may be configured to analyze user interaction between two or more analysis periods and separate user interaction with vehicle characteristics from user interaction with vehicle UI.
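As a minimal, non-limiting sketch of how paired data might be compared (the parameter names below, such as "teort_s" and "task_time_s", are hypothetical; the real system may use richer paired statistics), the per-user difference between simulated-environment and actual-environment measurements could be computed as:

```python
# Illustrative sketch only: per-parameter differences between a user's
# simulated-environment and actual-environment measurements.
# Parameter names are hypothetical.

def compare_paired(sim_params, act_params):
    """Return actual-minus-simulated differences for parameters present in both."""
    return {
        name: act_params[name] - sim_params[name]
        for name in sim_params
        if name in act_params
    }

# Example paired measurements for one individual performing the same task
sim = {"teort_s": 2.0, "task_time_s": 8.5}   # in-home driving proxy task
act = {"teort_s": 2.5, "task_time_s": 9.0}   # on-road, actual vehicle UI
delta = compare_paired(sim, act)
```

A positive difference here would indicate the metric was larger in the actual environment than in the simulated one, which is the kind of pairing the comparison step above relies on.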
In general, the data, from one or more sensors, received by the system may be separated into one or more datasets, collectively stored in a database. For example, a dataset may comprise: visual data (e.g., video) of user interface components; metadata of user interface components (e.g., alphanumeric codes for interface elements, relative timing, and information about goal state, including success of the task); visual data (e.g., video) of user interactions; metadata of user interactions (e.g., alphanumeric codes for actions, relative timing, and information about goal state, including success of the task); visual data (e.g., video) of contextual (e.g., environmental) information; and metadata of contextual (e.g., environmental) information. For example, alphanumeric codes paired to various UI elements may be in a look-up table. In some instances, video data and associated sensor data may be captured first, encrypted, then moved to another system (e.g., manual or automated system) to structure and create accompanying metadata. These metadata may be used, in conjunction with video, to create semi-autonomous or fully autonomous data structuring approaches. Video data and metadata may be stored separately, associated through alphanumeric codes, the key to which may be stored separately.
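One possible, non-limiting sketch of pairing metadata events to a look-up table of alphanumeric codes follows; the codes, field names, and element descriptions are hypothetical and shown only to illustrate the structuring approach:

```python
# Illustrative look-up table pairing alphanumeric codes to UI elements.
# All codes and element names are hypothetical.
UI_ELEMENT_CODES = {
    "NAV-01": "navigation map pane",
    "CLM-02": "climate temperature slider",
    "MED-03": "media volume knob",
}

def annotate_interaction(event):
    """Attach the human-readable UI element name to a raw metadata event."""
    return {
        "code": event["code"],
        "element": UI_ELEMENT_CODES.get(event["code"], "unknown element"),
        "t_start": event["t_start"],            # relative timing, seconds
        "t_end": event["t_end"],
        "task_success": event["task_success"],  # goal-state information
    }

raw_events = [
    {"code": "CLM-02", "t_start": 3.2, "t_end": 4.9, "task_success": True},
    {"code": "NAV-01", "t_start": 5.1, "t_end": 9.0, "task_success": False},
]

annotated = [annotate_interaction(e) for e in raw_events]
```

Storing only the codes with the video, and the look-up table (the "key") separately, is consistent with the separation of video data and metadata described above.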
In some instances of the present invention, the sensor data, video, and structured metadata may be fed into one or more machine learning models and/or algorithmic models configured to output one or more parameters or states of the user, for example, a posture state, a fatigue state, a visual attentional parameter, an auditory attentional parameter, or a tactile attentional parameter, for example in terms of both the momentary and longitudinal temporal characteristics of each. In some instances of the present invention, the sensor data, video, and/or structured metadata may be fed into one or more machine learning models and/or algorithmic models configured to output one or more states of the UI and/or of the context (e.g., objects exterior to the vehicle, activity surrounding the vehicle, etc.). Various combinations of user parameters, user statistics, UI states, and context states may be combined to produce training data for machine learning models associating human and environmental characteristics with design, bidirectionally.
In some embodiments, as the database increases in size, the system may be configured to leverage generative machine learning capabilities to provide extrapolations to outcomes not within the database. For example, the device might modify both the video and metadata of an event to show how an aged user would be expected to respond on-road, in the absence of actual video or metadata recorded of that specific scenario. In such a scenario, the device would tag that combination predicted for future data collection, and upon receiving the data, would use it both to update the prediction and to enhance the prediction machine learning system. Further, by analyzing all extrapolations, the system may be configured to identify which types of data are needed and/or when they are needed.
In some embodiments, specific events, metadata, etc. may be tagged for further analysis. For example, if a driver almost hit another car, the event may be tagged or the sequence of events leading up to the near miss may be tagged, so that annotation could be added describing the experience of the user at that moment. These events could be tagged in the data either by use of a button, a voice command, or other method.
In some embodiments, the output from the system and/or one or more machine learning models is one or more of a candidate design or proposed design or updated design for a UI. In some embodiments, the output is a prediction of an outcome of a user interaction with the UI, for example in different vehicles, different contexts, and/or in different populations of users. The system may be further configured to test permutations of the one or more candidate or proposed designs and determine which permutation(s) lead to more desirable outcomes, as designed by the user. For example, this functionality could be used to evaluate a proposed new automotive interface, evaluate a software update to an existing automotive interface, or evaluate candidate automotive interfaces as compared to one another. The system may be configured to evaluate the impact of singular changes and/or aggregate changes. Said another way, statistical generalizations can be created that may generally hold true across a population of people, across vehicles from a particular manufacturer, etc.
In general, any of the systems described herein may be configured to determine relationships between video and metadata of in vehicle events and/or predict expected outcomes. From a vehicle sales perspective, in some embodiments, a display may be configured to show a possible vehicle purchaser a variety of interfaces similar to the one the possible vehicle purchaser is considering. When linked to a design tool, through a computer aided design package, the system may be configured to associate a UI being designed with pre-existing UI and provide screenshots and video footage of users in the database accessing the UI in actual environments across a variety of contextual situations and user populations.
In general, any of the systems, devices, methods, and/or data described herein may be secured via one or more methods. For example, homomorphic encryption or state-of-the-art cryptology methods may be used to securely transmit and analyze data. In situations necessitating heightened security, such as evaluation of unreleased components or UI, the original video data and associated sensor data can be moved off-line or even deleted.
In some instances, data may be auto formatted into a format more easily processed or that can be consumed by end-users/customers. The data may also be time-locked so that vehicle manufacturers can provide vehicle system data at a later date, or in parallel, via the access method most appropriate to the vehicle technology.
Taken together, the disclosure describes an ecosystem of data collection, structuring these data in an increasingly automated fashion, and providing state predictions. Data collection in-vehicle (actual environment) and in-home (simulated environment) is used to produce a pipeline of data correlated across dimensions of interest which feeds a system which can provide useful predictions to assist in automotive design, and in the prediction of the impact of automotive design. The system is a feedback cycle, extrapolating to situations not yet within its collected data, and filling in those blanks proactively after such an extrapolation. Although “in-home” is used, one of skill in the art will appreciate that a simulated environment may be set up in an office, home, workspace, school, etc.
The sensors 80 employed in the system may be any one or more of: image sensors (e.g., cameras), motion sensors, infrared sensors, LiDAR (light detection and ranging), force sensors, pressure sensors, physiological sensors (e.g., electrodermal sensors, electroencephalography sensors, electromyography sensors), etc. In some embodiments, one or more sensors 80 may be integrated into a wearable worn by the user or in proximity to the user, for example eye tracking glasses, watches, sensorized mats or cushions for seats, etc. The sensors may be configured to capture user interaction with the actual or simulated UI, the user interaction comprising one or more of: touch, gaze, voice, bodily movement, and posture.
In some embodiments, sensor data from multiple sensors 80 (e.g., images or video from multiple cameras) may be stitched together, for example into a video. The video may be three-dimensional, for example to enable scanning through a scene, annotation of a scene, and/or for analyzing a scene for safety, usability, and sentinel scenarios. As used herein, sentinel scenarios are used to describe scenarios in which analysis is needed to determine the user interactions, contexts, etc. that resulted in a user interaction (e.g., in the vehicle, in a context, with the HMI) being determined to be positive, negative, or unpredicted. Various features, for example the HMI or other contextual features, may be overlaid onto the three-dimensional video. Sensor data, for example from an OBD2 port or CANBUS may also be overlaid and/or used to provide contextual data (e.g., vehicle state information).
In some embodiments, triangulation of gaze or view pinpoint may be performed using a plurality of sensors 80 coupled to a processor 60, such that the triangulation calculation performed by the processor 60 may be compared to or may be used to augment similar calculations from eye tracking glasses or another wearable. The calculations may be used to increase the precision in measuring where and when users are looking.
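One way such a triangulation calculation might be sketched (the ray-based formulation below is an assumption; the actual system may use a different gaze model) is as a least-squares intersection of gaze rays observed by multiple calibrated sensors:

```python
import numpy as np

def triangulate_gaze(origins, directions):
    """Least-squares 3D point closest to a set of gaze rays.

    origins: iterable of 3D ray origins (e.g., calibrated sensor positions).
    directions: iterable of 3D gaze directions from each sensor.
    Solves (sum of P_i) x = sum of P_i o_i, where P_i projects
    orthogonally to ray i, minimizing total squared distance to all rays.
    """
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for o, d in zip(np.asarray(origins, float), np.asarray(directions, float)):
        d = d / np.linalg.norm(d)            # unit gaze direction
        P = np.eye(3) - np.outer(d, d)       # projector orthogonal to the ray
        A += P
        b += P @ o
    return np.linalg.solve(A, b)
```

With two or more non-parallel rays, the returned point estimates where the gaze lines converge, which could then be compared against the estimate produced by eye tracking glasses to increase precision, as described above.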
In some embodiments, the sensor data may be used to create cones of sight and/or visual heat maps to be used for analyzing user interactions in simulated or actual environments, vehicle contextual data, etc. In some embodiments, the cone or visual representation of the visual field of view may be segmented to show where it is in relation to the user's peripheral vision.
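A minimal sketch of a visual heat map of this kind, assuming gaze points have already been projected onto a 2D interface plane (the grid resolution and coordinate convention are illustrative assumptions), might bin gaze samples into dwell counts:

```python
# Illustrative sketch: bin 2D gaze points into a coarse grid of dwell counts,
# one possible basis for a visual heat map of user attention on a UI.

def gaze_heatmap(gaze_points, width, height, cols=4, rows=4):
    """Count gaze samples falling in each cell of a rows-by-cols grid.

    gaze_points: iterable of (x, y) positions on the interface plane.
    width, height: dimensions of the interface plane in the same units.
    """
    grid = [[0] * cols for _ in range(rows)]
    for x, y in gaze_points:
        c = min(int(x / width * cols), cols - 1)   # clamp to last column
        r = min(int(y / height * rows), rows - 1)  # clamp to last row
        grid[r][c] += 1
    return grid

heat = gaze_heatmap([(0.1, 0.1), (0.9, 0.9), (0.95, 0.85)], 1.0, 1.0, cols=2, rows=2)
```

Cells with high counts correspond to "hot" regions of visual attention; a segmentation of the visual field relative to peripheral vision, as described above, could be layered on top of such a grid.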
The processor 60 may be further configured to determine and/or cause a display to display a point in time when an interaction was initiated or completed, to adjust the pinpoint marking (i.e., an indication of what the system determines the user is looking at) or heatmap highlighting created by the eye tracking glasses, to adjust what/where the person was looking (e.g., adjust based on context, UI specific features, etc.), etc. Such processor 60 functions may also be described as annotating. In other embodiments, such annotating may be performed manually by a user. Annotation of data acquired using the systems and methods described herein may be linked (e.g., based on reference tables, look-up tables, labeling, etc.) to other engineering documents and software and/or fed into machine learning algorithms for additional post-processing.
Further, as shown in
Turning now to
In some embodiments, a processor may be further configured to calculate a performance score 760 for the UI, such score being displayable on the reporting display 700. For example, an onboard processor or remote processor (e.g., server, workstation, mobile device, etc.) is configured to calculate the performance score. Generally, a performance score indicates a quality, safety, and/or usability score for the task or the subtask as a whole on the HMI/UI/GUI. More specifically, the score may represent the relative probability of different UIs producing failing marks relative to an aggregate index of governmental safety guidelines for UIs. For example, in automotive manufacturer testing, a new software update could receive a score before and after the update, relative to the safety guidelines the manufacturer is required to conform to within one or more regions in which the UI would be deployed. Likewise, the score may be used to represent the comparative risk of vehicle models or of sub-variants of vehicle models. It may, therefore, in some embodiments, be used in the calculation of actuarial values for determining risk (e.g., insurance calculations) or to assist in the decisions made by users as to the relative risk/reward ratio of various vehicle models. As such, the performance score may become a widely recognized score, similar to physical safety scores such as crash test scores, accident-avoidance scores, rollover resistance, roof strength, rear impact protection, etc.
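A minimal sketch of one such score follows. The weights and limits are illustrative assumptions only (the 12-second budget is loosely modeled on published visual-manual driver-distraction guidance); the actual scoring model, and its relation to any regional guideline index, may differ:

```python
# Illustrative sketch: score a task 0-100 against assumed guideline limits.
# Weights (0.6 / 0.4) and limits are hypothetical, not regulatory values.

def performance_score(teort_s, task_time_s, teort_limit=12.0, task_limit=24.0):
    """Higher is better; 0 means a limit was met or exceeded.

    teort_s: total eyes-off-road time for the task, seconds.
    task_time_s: task completion time, seconds.
    """
    teort_margin = max(0.0, 1.0 - teort_s / teort_limit)   # headroom vs. limit
    time_margin = max(0.0, 1.0 - task_time_s / task_limit)
    return round(100 * (0.6 * teort_margin + 0.4 * time_margin), 1)
```

Under this sketch, a before/after comparison of a software update would simply compare the two scores for the same task on the old and new UI versions.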
Turning now to
As shown in
In embodiments comprising one display, there may be multiple windows displayed on the display. In embodiments comprising multiple displays, each display may display a unique window. For example, a first window or display may display a distracting task (e.g., touching a periodically moving ball) and a second window or display may display a task for completion on a simulated vehicle UI. The sensors substantially surrounding the user may then be configured to capture user interaction with the UI as he/she completes the task. Any number of tablets, monitors, screens, etc. can be used and re-arranged to simulate different types of vehicle interfaces.
In some embodiments of
Once the simulated environment system is received, a user may begin by downloading an application onto a computing device of the user, which guides the user through setting up the system in an appropriate location and orientation, and connecting and setting up the locations and orientations of any accompanying sensors or control devices. The user is then trained through modules to use the UI while also engaging in a driving proxy task. The UI may display actual center stack material captured in-vehicle by in-vehicle device recordings, as well as prototype UIs designed in the absence of deployment in an actual vehicle. The driving proxy task may include, but not be limited to, various motor control tasks to be performed on the display or through control devices accompanying the system. These include first-person driving games, overhead driving games, and simple tracking tasks in which the user adjusts the motion of a digital object relative to another. Upon completion of the simulated task (e.g., completing a drive while using the UI), the application may guide the user through qualitative questions relative to his/her experience.
Similar onboarding and/or training, as described elsewhere herein, may also be used for mock stations, for example ground stations.
As shown in
Optionally, eye tracking sensors may also be installed in the dash as a ‘hard install’ to the vehicle, while still electrically connected to a local power source and/or local computer readable medium and memory for data storage. One or more sensors may also be positioned on an under carriage of the vehicle, in a wheel well of the vehicle, or otherwise mounted on the vehicle such that the sensors can detect road conditions, for example road type (e.g., gravel, pavement, etc.), slope, grade, condition, moisture, etc.
The one or more sensors positioned within the vehicle and/or on the exterior of the vehicle may be configured to capture additional data (e.g., images, video, sensor data, etc.) indicating a status of the driver and/or contextual data of the driving conditions and environment.
Additionally, or alternatively, one or more sensors may be permanently installed (as opposed to temporary installation as described above) such that data can be collected over time and installation is not repeated.
In embodiments where multiple vehicles in an environment are outfitted with one or more of the systems described herein, interactions between the drivers and/or vehicles on the road, including vehicle to vehicle communication interactions, can be recorded and synchronized.
In either or both actual and simulated environments, the sensors are configured to be positioned to substantially surround a user 260, 360, 460, 560, 660. For example, the sensors may provide an "over-the-shoulder" view, side view, front view, rear view, oblique view, etc. of the user 260, 360, 460, 560, 660 or a UI that the user is interacting with. The user may also be optionally wearing one or more sensors, for example via a wearable system (e.g., eye tracking glasses with sensor 698, watch, etc.). The sensors may be powered by an internal battery or an external power source. Data may be recorded to an internal storage medium of the sensor or on an external hard drive. Data may be transmitted to a hard drive either wirelessly or via wire.
The sensors 210, 220, 310, 320, 410, 420, 450, 570, 580, 590, 592, 692, 694 may be mountable using telescoping legs, a temporary adhesive, a windshield mount, a rubber (anti-slip) strip, pressure sensitive adhesive, etc.
In some embodiments, user interaction data with the UI is compared to and/or correlated with contextual data or vehicle sensor data collected by the sensors coupled to the vehicle, which may be correlated in time and/or context.
One or more sensors may be at least partially encased in a housing. The housing may further encase a power source, local storage, local processor, antenna, etc. for functioning of the sensor. Heavy or extensive processing of sensor data may be performed by a second processor remote from the local processor, for example at a server, in vehicle, or other computing device.
In some embodiments, once the sensors are installed and activated (in simulated or actual environment), a user may be directed (e.g., via audio or visual instructions) to download an application to a computing device of the user. Once the application is installed, a communications link to the sensor (e.g., in the same housing) is established wirelessly (e.g., via an antenna) or by cable (e.g., via a databus), and the user is guided through the installation of the sensor in the simulated or actual environment. Once the sensor is secured in the actual or simulated environment, the user may be instructed to orient each sensor, for example to capture a central, side, rear, interior, exterior, etc. view. These cameras provide the ability to record all or a subset of the sides of the simulated or actual environment and provide coverage of at least a portion of the body of the user and the UI. Once the orientation has been achieved, the user is guided through activating the display or UI and moving through various screens of the UI. In some embodiments, the activation process may be recorded by one or more sensors to generate an image of the UI installed in the vehicle, including the current software version. In some embodiments, the user may be further instructed to update or rollback software versions. Finally, in some embodiments, the user may enter a training module to teach the user how to use the system in a simulated environment or in an on-road or actual environment (e.g., while driving the vehicle in which the system is installed).
For actual environments, one or more sensors may be affixed to movable vehicle components, pressure-sensitive vehicle components, etc. For example, an accelerometer for attachment to movable vehicle components (e.g., pedals, wheel, lever) may be wireless (e.g., Bluetooth, near-field communication, etc.) enabled and comprise an attachment mechanism (e.g., rubber strip on one side and adhesive on the opposing side). Further for example, a pressure sensor for attachment to pressure-sensitive vehicle components (e.g., buttons, touchscreens, touchpads, and sliders) may also be wireless (e.g., Bluetooth, near-field communication, etc.) enabled and comprise an attachment mechanism (e.g., sticker, adhesive, etc.).
Once set up, for example as shown in
To synchronize systems within a vehicle or across vehicles or between a vehicle and infrastructure or in swarms of vehicles or between a simulated and actual environment, the hardware (e.g., sensors, communicatively coupled computing device, display, etc.) may include a 'start' and 'stop' button that starts or stops all or a subset of the sensors (e.g., camera(s), eye tracking device(s)) in unison or in a sequence. Additionally, or alternatively, an infrared strobe light may be mounted within view of all or a subset of the sensors, such that the infrared strobe light may be used to synchronize sensor data in a vehicle or sensor data across a plurality of vehicles or sensor data between an actual and simulated environment. The same approach (infrared strobe light) may also be used to synchronize sensor data across geographic locations, for example when networked operator interactions for complex command and control actions are in different geographic locations but need to execute tasks as a team. In some embodiments, atomic clocks, the internet, or other referencing or synchronizing technologies known in the art may also be used. For example, the start of the sensor readings may be synchronized using atomic clocks, or the internet, and then the strobes in both locations can help to maintain synchronization between the two sets of data over long time intervals, where there is a risk of signals becoming unsynchronized.
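One possible sketch of the strobe-based alignment step (the two-flash linear model below is an assumption; it corrects both a constant offset and a constant clock-rate drift between two sensor streams) is:

```python
# Illustrative sketch: map one sensor stream's local timestamps onto a
# reference clock using two shared strobe-flash events seen by both sensors.
# A two-point fit corrects both offset and linear clock drift.

def align_timestamps(local_times, strobes_local, strobes_ref):
    """Rescale local_times so the two strobe flashes coincide on both clocks.

    strobes_local: (t0, t1) flash times on the local sensor's clock.
    strobes_ref:   (t0, t1) the same flashes on the reference clock.
    """
    (l0, l1), (r0, r1) = strobes_local, strobes_ref
    scale = (r1 - r0) / (l1 - l0)      # corrects clock-rate drift
    return [r0 + (t - l0) * scale for t in local_times]

# Local sensor saw flashes at t=0 and t=10; the reference saw them at
# t=100 and t=120, i.e., the local clock runs at half the reference rate.
aligned = align_timestamps([0.0, 5.0, 10.0], (0.0, 10.0), (100.0, 120.0))
```

With more than two flashes, the same idea extends to a least-squares fit, which would further reduce drift over long recordings.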
In some embodiments, a computer-implemented method of evaluating a control interface based on user interaction comprises various steps as shown in
Turning to
The plurality of parameters may comprise, but not be limited to, one or more of: a total eyes off road time metric, a task completion time, a subtask completion time, or a performance score.
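Two of the listed parameters can be illustrated with a short sketch. This is an assumed formulation, not prescribed by the disclosure: it represents gaze data as timestamped samples labeled by a hypothetical on-road classifier, sums the intervals during which gaze was off the road, and computes task completion time from start and end events.

```python
from dataclasses import dataclass

@dataclass
class GazeSample:
    t: float          # timestamp in seconds
    on_road: bool     # classifier output: gaze on the road scene?

def total_eyes_off_road(samples):
    """Sum the time between consecutive samples during which the
    gaze was classified as off the road (seconds)."""
    total = 0.0
    for prev, cur in zip(samples, samples[1:]):
        if not prev.on_road:
            total += cur.t - prev.t
    return total

def task_completion_time(task_start, task_end):
    """Elapsed time between task start and completion events."""
    return task_end - task_start
```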
In some embodiments, the actual environment UI is a test UI of a vehicle but could alternatively, or additionally, be a test UI for any type of craft, vehicle, plane, control system, command system, etc.
In some embodiments, the indication comprises a performance indication of a feature of the actual environment UI. In other instances, the indication comprises a recommendation. The recommendations may include, but not be limited to: a UI feature replacement, a UI feature elimination, a subtask modification, or a combination thereof.
The method 800 may optionally include outputting guidance to the user to position the one or more sensors so that the one or more sensors are configured to capture the first user interaction with the displayed actual environment UI.
The method 800 may optionally further include triangulating, using the sensor signals, an eye position of the user; and determining the first user interaction based on the triangulation, wherein the first user interaction is a visual interaction.
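One conventional way to triangulate a position from two camera observations is the midpoint method: each camera contributes a ray (origin plus bearing vector toward the detected eye), and the estimate is the midpoint of the shortest segment between the two rays. The sketch below illustrates that standard geometry; it is not asserted to be the specific triangulation used by the method.

```python
import numpy as np

def triangulate(p1, d1, p2, d2):
    """Midpoint of the shortest segment between two rays
    p1 + s*d1 and p2 + t*d2 (camera origins and bearing
    vectors toward the detected eye)."""
    d1 = d1 / np.linalg.norm(d1)
    d2 = d2 / np.linalg.norm(d2)
    w0 = p1 - p2
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    d, e = d1 @ w0, d2 @ w0
    denom = a * c - b * b
    if abs(denom) < 1e-9:
        raise ValueError("rays are parallel; cannot triangulate")
    # closed-form closest-point parameters for the two rays
    s = (b * e - c * d) / denom
    t = (a * e - b * d) / denom
    return (p1 + s * d1 + p2 + t * d2) / 2.0
```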
The method 800 may optionally include capturing a light interval, with the plurality of cameras, from a periodic light source; and synchronizing, in time, the plurality of cameras based on the captured light interval.
In some embodiments, the displayed actual environment UI comprises a task UI, such that the method comprises displaying a task for completion on the task UI. The task may optionally be broken into subtasks that are either predetermined or user selected, for example during the user interaction.
The method 800 may optionally further include receiving a user input of a make and model of the vehicle that is associated with the test UI; and updating the indication based on the user input.
Turning to
The plurality of parameters may comprise, but not be limited to, one or more of: a total eyes off road time metric, a task completion time, a subtask completion time, or a performance score.
In some embodiments, the actual environment UI is a test UI of a vehicle but could alternatively, or additionally, be a test UI for any type of craft, vehicle, plane, control system, command system, etc.
In the method 900 of
Method 900 may optionally further include calculating a plurality of user parameters for the user of the actual environment UI. The plurality of user parameters may include, but not be limited to: a posture, a fatigue, a visual interaction pattern, an auditory interaction pattern, a tactile interaction pattern, or a combination thereof.
Method 900 may optionally further include calculating a readiness of the user for interaction with the actual environment UI based on the calculated plurality of parameters. Any one or more of the displays may present an updated actual environment UI when the readiness of the user exceeds a predetermined threshold. The indication may include a user interface quality indicator of the displayed actual environment UI based on the second user interaction with the displayed actual environment UI.
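The readiness-gating step above can be sketched as follows. The score formula and the equal weighting are illustrative assumptions (the disclosure does not prescribe how the user parameters combine); each input is taken as a normalized 0-to-1 value derived from the calculated user parameters.

```python
def readiness_score(posture, fatigue, attention):
    """Combine user parameters (each normalized to 0..1, where
    higher posture/attention and lower fatigue are better) into a
    single readiness value. Equal weighting is illustrative only."""
    return (posture + (1.0 - fatigue) + attention) / 3.0

def maybe_update_ui(posture, fatigue, attention, threshold=0.7):
    """Return True when the updated actual environment UI should be
    presented, i.e. readiness exceeds the predetermined threshold."""
    return readiness_score(posture, fatigue, attention) > threshold
```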
In some embodiments, the first or the second user interaction comprises one or more of: a user touch, a user look, a user voice, a user bodily movement (arm movement, shoulder movement, etc.), or a user posture.
Method 900 may optionally further include calculating a relative probability that one or both of the displayed simulated environment UI or the actual environment UI meet or exceed a predefined safety threshold.
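One simple way to estimate such a relative probability is empirically: compute, for each UI, the fraction of recorded interactions whose safety metric met the threshold, then take the ratio. The metric (total eyes-off time) and the 2.0 s threshold below are illustrative assumptions, not values prescribed by the disclosure.

```python
def prob_meets_threshold(eyes_off_times, max_eyes_off=2.0):
    """Fraction of recorded interactions whose total eyes-off time
    stayed at or under the safety threshold (illustrative value)."""
    if not eyes_off_times:
        return 0.0
    ok = sum(1 for t in eyes_off_times if t <= max_eyes_off)
    return ok / len(eyes_off_times)

def relative_probability(simulated_times, actual_times, max_eyes_off=2.0):
    """Ratio of the simulated UI's pass probability to the actual
    UI's pass probability; > 1 means the simulated UI fared better."""
    p_sim = prob_meets_threshold(simulated_times, max_eyes_off)
    p_act = prob_meets_threshold(actual_times, max_eyes_off)
    if p_act == 0.0:
        raise ZeroDivisionError("actual UI never met the threshold")
    return p_sim / p_act
```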
Turning to
The plurality of parameters may comprise, but not be limited to, one or more of: a total eyes off UI time metric (similar to a total eyes off road time metric), a task completion time, a subtask completion time, or a performance score.
In some embodiments, the actual environment UI is a test UI of a vehicle but could alternatively, or additionally, be a test UI for any type of craft, vehicle, plane, control system, command system, etc.
Method 1000 may optionally further include outputting a prediction of one or more metrics of an actual user interaction with an actual environment UI, that corresponds to the simulated environment UI, based on the calculated plurality of parameters.
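A prediction of this kind could be as simple as a linear mapping fitted on paired historical sessions where the same users completed both the simulated and the actual task. The least-squares fit below is a sketch under that assumption; the disclosure does not specify the prediction model.

```python
import numpy as np

def fit_sim_to_actual(sim_metric, actual_metric):
    """Least-squares fit actual = a * simulated + b from paired
    historical sessions; returns (a, b)."""
    a, b = np.polyfit(np.asarray(sim_metric), np.asarray(actual_metric), 1)
    return a, b

def predict_actual(sim_value, a, b):
    """Predict the metric a user would achieve on the actual
    environment UI from their simulated-environment value."""
    return a * sim_value + b
```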
Method 1000 may optionally further include calculating a second plurality of parameters for the user of the simulated environment UI. The second plurality of parameters may include, but not be limited to: a posture, a fatigue, a visual interaction pattern, an auditory interaction pattern, a tactile interaction pattern, or a combination thereof.
In some embodiments, the displayed simulated environment UI comprises a distraction UI and a task UI, such that the method comprises displaying a task for completion on the task UI while displaying a distraction on the distraction UI. The distraction UI may be configured to display one or more of: simulated weather conditions, simulated road conditions, or simulated location conditions, although this list is non-limiting and may include any conditions (weather, traffic, or otherwise) that a user may encounter while driving.
Since the simulated environment does not necessarily include a road, the total eyes off UI metric may be a total eyes off task UI metric or a total eyes on distraction UI metric.
Method 1000 may optionally further include receiving a user input of a make and model of the vehicle that is associated with the test UI; and updating the indication based on the user input.
The systems and methods of the preferred embodiment and variations thereof can be embodied and/or implemented at least in part as a machine configured to receive a computer-readable medium storing computer-readable instructions. The instructions are preferably executed by computer-executable components preferably integrated with the system and one or more portions of the processor in the vehicle, in the housing comprising one or more sensors, and/or computing device. The computer-readable medium can be stored on any suitable computer-readable media such as RAMs, ROMs, flash memory, EEPROMs, optical devices (e.g., CD or DVD), hard drives, floppy drives, or any suitable device. The computer-executable component is preferably a general or application-specific processor, but any suitable dedicated hardware or hardware/firmware combination can alternatively or additionally execute the instructions.
Example 1. A computer-implemented method of evaluating a control interface based on user interaction, comprising: displaying, on a display, an actual environment user interface (UI) to a user, wherein the actual environment UI is a test UI of a vehicle; receiving, from one or more sensors, signals indicative of a first user interaction with the displayed actual environment UI; calculating a plurality of parameters of the first user interaction with the actual environment UI based on the received signals indicative of the first user interaction, wherein the plurality of parameters comprise one or more of: a total eyes off road time metric, a task completion time, a subtask completion time, or a performance score; and outputting an indication for the actual environment UI.
Example 2. The computer-implemented method of any one of the preceding examples, but particularly Example 1, wherein the indication comprises a performance indication of a feature of the actual environment UI.
Example 3. The computer-implemented method of any one of the preceding examples, but particularly Example 1, wherein the indication comprises a recommendation.
Example 4. The computer-implemented method of any one of the preceding examples, but particularly Example 3, wherein the recommendation comprises a UI feature replacement, a UI feature elimination, a subtask modification, or a combination thereof.
Example 5. The computer-implemented method of any one of the preceding examples, but particularly Example 1, wherein the one or more sensors are integrated into one or more of: eye tracking glasses, microphone, a user-mountable camera, a seat-mountable camera, or a display-mountable camera.
Example 6. The computer-implemented method of any one of the preceding examples, but particularly Example 1, further comprising outputting a guidance to the user to position the one or more sensors so that the one or more sensors are configured to capture the first user interaction with the displayed actual environment UI.
Example 7. The computer-implemented method of any one of the preceding examples, but particularly Example 1, further comprising triangulating, using the signals, an eye position of the user; and determining the first user interaction based on the triangulation, wherein the first user interaction is a visual interaction.
Example 8. The computer-implemented method of any one of the preceding examples, but particularly Example 1, wherein the one or more sensors comprise a plurality of cameras, such that the method further comprises capturing a light interval, with the plurality of cameras, from a periodic light source; and synchronizing the plurality of cameras based on the captured light interval.
Example 9. The computer-implemented method of any one of the preceding examples, but particularly Example 1, wherein the displayed actual environment UI comprises a task UI, such that the method comprises displaying a task for completion on the task UI.
Example 10. The computer-implemented method of any one of the preceding examples, but particularly Example 1, further comprising receiving a user input of a make and model of the vehicle that is associated with the test UI; and updating the indication based on the user input.
Example 11. A computer-implemented method of evaluating control interfaces based on user interactions in simulated and actual environments, comprising: displaying, on a display, a simulated environment user interface (UI) to a user, wherein the simulated environment UI represents a test UI of a vehicle; receiving, from one or more sensors, signals indicative of a first user interaction with the displayed simulated environment UI; calculating a first plurality of parameters of the simulated environment UI based on the received signals indicative of the first user interaction, wherein the first plurality of parameters comprise one or more of: a total eyes off road time metric, a task completion time, a subtask completion time, or a predictive score; displaying, on the display or a second display, an actual environment UI to the user, wherein the displayed actual environment UI corresponds to the displayed simulated environment UI, and wherein the actual environment UI is the test UI of the vehicle; receiving, from one or more sensors, signals indicative of a second user interaction with the displayed actual environment UI; calculating a second plurality of parameters of the actual environment UI based on the received signals indicative of the second user interaction, wherein the second plurality of parameters comprise one or more of: a total eyes off road time metric, a task completion time, a subtask completion time, or a predictive score; comparing the first plurality of parameters to the second plurality of parameters; and outputting an indication for one or both of the simulated or actual environment UI.
Example 12. The computer-implemented method of any one of the preceding examples, but particularly Example 11, further comprising calculating a plurality of user parameters for the user of the actual environment UI.
Example 13. The computer-implemented method of any one of the preceding examples, but particularly Example 12, wherein the plurality of user parameters comprise: a posture, a fatigue, a visual interaction pattern, an auditory interaction pattern, a tactile interaction pattern, or a combination thereof.
Example 14. The computer-implemented method of any one of the preceding examples, but particularly Example 13, further comprising calculating a readiness of the user for interaction with the actual environment UI based on the calculated plurality of parameters.
Example 15. The computer-implemented method of any one of the preceding examples, but particularly Example 14, further comprising displaying, on the display or the second display, an updated actual environment UI when the readiness of the user exceeds a predetermined threshold.
Example 16. The computer-implemented method of any one of the preceding examples, but particularly Example 11, wherein the indication comprises a user interface quality indicator of the displayed actual environment UI based on the second user interaction with the displayed actual environment UI.
Example 17. The computer-implemented method of any one of the preceding examples, but particularly Example 11, wherein the display or the second display comprise a virtual reality headset, a mobile device display, a laptop display, or a desktop display.
Example 18. The computer-implemented method of any one of the preceding examples, but particularly Example 11, wherein the first or the second user interaction comprises one or more of: a user touch, a user look, a user voice, a user bodily movement, or a user posture.
Example 19. The computer-implemented method of any one of the preceding examples, but particularly Example 11, wherein the one or more sensors are integrated into one or more of: eye tracking glasses, a microphone, a dashboard-mountable camera, a seat-mountable camera, a vehicle-mountable camera, a user-mountable camera, or a display-mountable camera.
Example 20. The computer-implemented method of any one of the preceding examples, but particularly Example 11, further comprising calculating a relative probability that one or both of the displayed simulated environment UI or the actual environment UI meet or exceed a predefined safety threshold.
Example 21. A computer-implemented method of simulating user interaction with control interfaces, comprising: displaying, on a display, a simulated environment user interface (UI) to a user, wherein the simulated environment UI represents a test UI of a vehicle; receiving, from one or more sensors, signals indicative of a first user interaction with the displayed simulated environment UI; calculating a plurality of parameters of the simulated environment UI based on the received signals indicative of the first user interaction, wherein the plurality of parameters comprises one or more of: a total eyes off UI time metric, a task completion time, a subtask completion time, or a performance score; and outputting an indication for the simulated environment UI.
Example 22. The computer-implemented method of any one of the preceding examples, but particularly Example 21, further comprising outputting a prediction of one or more metrics of an actual user interaction with an actual environment UI, that corresponds to the simulated environment UI, based on the calculated plurality of parameters, wherein the actual environment UI is the test UI of the vehicle.
Example 23. The computer-implemented method of any one of the preceding examples, but particularly Example 21, further comprising calculating a second plurality of parameters for the user of the simulated environment UI.
Example 24. The computer-implemented method of any one of the preceding examples, but particularly Example 23, wherein the second plurality of parameters comprise: a posture, a fatigue, a visual interaction pattern, an auditory interaction pattern, a tactile interaction pattern, or a combination thereof.
Example 25. The computer-implemented method of any one of the preceding examples, but particularly Example 21, wherein the displayed simulated environment UI comprises a distraction UI and a task UI, such that the method comprises displaying a task for completion on the task UI while displaying a distraction on the distraction UI.
Example 26. The computer-implemented method of any one of the preceding examples, but particularly Example 25, wherein the distraction UI is configured to display one or more of: simulated weather conditions, simulated road conditions, or simulated location conditions.
Example 27. The computer-implemented method of any one of the preceding examples, but particularly Example 25, wherein the total eyes off UI metric is a total eyes off task UI metric or a total eyes on distraction UI metric.
Example 28. The computer-implemented method of any one of the preceding examples, but particularly Example 21, further comprising receiving a user input of a make and model of the vehicle that is associated with the test UI; and updating the indication based on the user input.
As used in the description and claims, the singular form “a”, “an” and “the” include both singular and plural references unless the context clearly dictates otherwise. For example, the term “sensor” may include, and is contemplated to include, a plurality of sensors. At times, the claims and disclosure may include terms such as “a plurality,” “one or more,” or “at least one;” however, the absence of such terms is not intended to mean, and should not be interpreted to mean, that a plurality is not conceived.
The term “about” or “approximately,” when used before a numerical designation or range (e.g., to define a length or pressure), indicates approximations which may vary by (+) or (−) 5%, 1% or 0.1%. All numerical ranges provided herein are inclusive of the stated start and end numbers. The term “substantially” indicates mostly (i.e., greater than 50%) or essentially all of a device, substance, or composition.
As used herein, the term “comprising” or “comprises” is intended to mean that the devices, systems, and methods include the recited elements, and may additionally include any other elements. “Consisting essentially of” shall mean that the devices, systems, and methods include the recited elements and exclude other elements of essential significance to the combination for the stated purpose. Thus, a system or method consisting essentially of the elements as defined herein would not exclude other materials, features, or steps that do not materially affect the basic and novel characteristic(s) of the claimed disclosure. “Consisting of” shall mean that the devices, systems, and methods include the recited elements and exclude anything more than a trivial or inconsequential element or step. Embodiments defined by each of these transitional terms are within the scope of this disclosure.
The examples and illustrations included herein show, by way of illustration and not of limitation, specific embodiments in which the subject matter may be practiced. Other embodiments may be utilized and derived therefrom, such that structural and logical substitutions and changes may be made without departing from the scope of this disclosure. Such embodiments of the inventive subject matter may be referred to herein individually or collectively by the term “invention” merely for convenience and without intending to voluntarily limit the scope of this application to any single invention or inventive concept, if more than one is in fact disclosed. Thus, although specific embodiments have been illustrated and described herein, any arrangement calculated to achieve the same purpose may be substituted for the specific embodiments shown. This disclosure is intended to cover any and all adaptations or variations of various embodiments. Combinations of the above embodiments, and other embodiments not specifically described herein, will be apparent to those of skill in the art upon reviewing the above description.
This application claims the priority benefit of U.S. Provisional Patent Application Ser. No. 63/342,557, filed May 16, 2022, the contents of which are herein incorporated by reference in their entirety.