SYSTEMS AND METHODS FOR ACCESSIBLE COMPUTER-USER INTERACTIONS

Abstract
Implementations described herein relate to methods, systems, and computer-readable media for accessible computer-user interactions. For example, a method can include displaying a graphical user interface on a display screen. The graphical user interface includes a virtual assessment, the virtual assessment is representative of an assessment examination, and the graphical user interface further comprises a graphical element that represents a portion of a logical problem of the assessment examination. The method can also include receiving a signal indicative of placement of a physical object onto the display screen above the graphical element, repositioning the graphical element responsive to physical movement of the physical object on the display screen, and generating a portion of an assessment score based at least in part on one or more of: physical movement of the physical object on the display screen, the signal, or the repositioning of the graphical element.
Description
TECHNICAL FIELD

Embodiments relate generally to computer use by individuals with limited accessibility, and more particularly but not exclusively, to methods, systems, and computer readable media for accessible computer-user interactions.


BACKGROUND

Traditional standardized cognitive assessments primarily evaluate content mastery or domain knowledge, processing speed, and memory. The College Entrance Examination Board, now the College Board, was established in 1900 to define a set of college admission standards, and it first administered the Scholastic Aptitude Test (SAT) in 1926. In 1959, the American College Test (ACT) was released as an alternative to the SAT. Both the ACT and the SAT focus on standardized content in mathematics, writing, science, and other subject-specific areas to create objective metrics. While widely adopted across the United States, these assessments reveal little about an individual's specific cognitive abilities.


In response to the shortcomings in both the methodology and substance of traditional standardized college admissions tests, employers have adopted other traditional cognitive ability or intelligence tests in an effort to glean more predictive insights on applicants' cognitive profiles. However, these assessments, like standardized admissions tests, also focus on content mastery or domain knowledge, processing speed, and memory. These factors ignore the increasing need to develop and measure capabilities required by the 21st-century workforce.


Though conventional assessment providers may administer digital assessments, their assessments are susceptible to accessibility issues for fully blind or other visually impaired test takers. For example, under the Americans with Disabilities Act (ADA), assessment programs are required to provide accommodations for such test takers. These conventional assessment providers have struggled to resolve accessibility issues for fully blind or other visually impaired test takers in their digital assessments.


The background description provided herein is for the purpose of presenting the context of the disclosure. Work of the presently named inventors, to the extent it is described in this background section, as well as aspects of the description that may not otherwise qualify as prior art at the time of filing, are neither expressly nor impliedly admitted as prior art against the present disclosure.


SUMMARY

Implementations of this application relate to accessible computer-user interactions. In some implementations, a computer-implemented method comprises: displaying a graphical user interface on a display screen, wherein the graphical user interface comprises a virtual assessment, wherein the virtual assessment is representative of an assessment examination, and wherein the graphical user interface further comprises a graphical element that represents a portion of a logical problem of the assessment examination; receiving a signal indicative of placement of a physical object onto the display screen above the graphical element; repositioning the graphical element responsive to physical movement of the physical object on the display screen; and generating a portion of an assessment score based at least in part on: physical movement of the physical object on the display screen, the signal, and the repositioning of the graphical element.


In some implementations, the graphical user interface further comprises a plurality of graphical elements that each represent a different portion of the logical problem of the assessment examination, and wherein the method further comprises: receiving a plurality of signals indicative of placement of respective physical objects onto the display screen above a corresponding graphical element of the plurality of graphical elements; repositioning a graphical element of the plurality of graphical elements responsive to physical movement of a respective physical object; and generating the assessment score based on the plurality of signals and the repositioning of the graphical element of the plurality of graphical elements.


In some implementations, the assessment examination is a test for assessment of a vocational candidate.


In some implementations, the graphical user interface further comprises one or more instructional elements comprising indicia that represent instructions to complete the virtual assessment.


In some implementations, the method further comprises: generating an audio file using a speech synthesizer, wherein the audio file comprises one or more segments of synthesized speech of the instructions to complete the virtual assessment; and audibly playing the audio file.


In some implementations, the method further comprises: generating an audio file using a speech synthesizer, wherein the audio file comprises one or more segments of synthesized speech that describe the logical problem; and audibly playing the audio file responsive to receiving the signal.


In some implementations, the method further comprises: generating an audio file using a speech synthesizer, wherein the audio file comprises one or more segments of synthesized speech that describe a position of the graphical element; and audibly playing the audio file responsive to repositioning the graphical element.


In some implementations, the physical object comprises: a partially conductive base configured to alter capacitance of a portion of the display screen; and an upper portion disposed on the partially conductive base, wherein the upper portion is a physical representation of the graphical element, and wherein the physical representation includes at least one or more of: texture that represents the graphical element, features identifiable by touch that represent the graphical element, or braille embossing that describes at least a portion of the graphical element.


In some implementations, the method further comprises periodically audibly playing synthesized speech that describes a state of the virtual assessment.


In some implementations, the method further comprises periodically applying haptic feedback responsive to movement of the physical object on the display screen.


According to another aspect, a system for accessible game-user interactions comprises: a display device configured to display a graphical user interface, wherein the graphical user interface comprises a virtual assessment, wherein the virtual assessment is representative of an assessment examination, and wherein the graphical user interface further comprises a graphical element that represents a portion of a logical problem of the assessment examination; a memory with instructions stored thereon; and a processing device, coupled to the memory and to the display device, wherein the processing device is configured to access the memory and execute the instructions, and wherein the instructions, in response to execution by the processing device, cause the processing device to perform or control performance of operations comprising: displaying the graphical user interface on the display device; receiving a signal indicative of placement of a physical object onto the display device above the graphical element; repositioning the graphical element responsive to physical movement of the physical object on the display device; and generating an assessment score based on the signal and the repositioning of the graphical element.


In some implementations, the system further comprises an additive or subtractive printing apparatus configured to physically print one or more copies of the physical object.


In some implementations, the operations further comprise: generating a data file representative of a printing sequence of a copy of the physical object; and transmitting the data file to the additive or subtractive printing apparatus.


In some implementations, the physical object comprises: a partially conductive base configured to alter capacitance of a portion of the display device; and an upper portion disposed on the partially conductive base, wherein the upper portion is a physical representation of the graphical element, and wherein the physical representation includes at least one or more of: texture that represents the graphical element, features identifiable by touch that represent the graphical element, or braille embossing that describes at least a portion of the graphical element.


In some implementations, the partially conductive base is a 3D printed base having a conductive layer applied thereon.


In some implementations, the graphical user interface further comprises one or more instructional elements comprising indicia that represent instructions to complete the virtual assessment.


In some implementations, the operations further comprise: generating an audio file using a speech synthesizer, wherein the audio file comprises one or more segments of synthesized speech of the instructions to complete the virtual assessment; and audibly playing the audio file.


In some implementations, the operations further comprise: generating an audio file using a speech synthesizer, wherein the audio file comprises one or more segments of synthesized speech that describe the logical problem; and audibly playing the audio file responsive to receiving the signal.


In some implementations, the operations further comprise: generating an audio file using a speech synthesizer, wherein the audio file comprises one or more segments of synthesized speech that describe a position of the graphical element; and audibly playing the audio file responsive to repositioning the graphical element.


In some implementations, the operations further comprise: periodically audibly playing synthesized speech that describes a state of the virtual assessment; and periodically applying haptic feedback responsive to movement of the physical object on the display screen.





BRIEF DESCRIPTION OF THE DRAWINGS


FIG. 1 is a diagram of an example network environment for accessible computer-user interactions, in accordance with some implementations.



FIG. 2 is a schematic of a graphical user interface (GUI) for accessible computer-user interactions, in accordance with some implementations.



FIG. 3A is an isometric view of a physical object for accessible computer-user interactions, in accordance with some implementations.



FIG. 3B is an elevation view of the physical object of FIG. 3A.



FIG. 4 is a visualization of a virtual assessment implemented with the GUI of FIG. 2 and one or more physical objects, in accordance with some implementations.



FIG. 5 illustrates an example data structure for aggregating user assessment scores, in accordance with some implementations.



FIG. 6 is a flowchart of an example method for accessible computer-user interactions, in accordance with some implementations.



FIG. 7 is a flowchart of an example method for creating accessible computer-user interactions, in accordance with some implementations.



FIG. 8 is a block diagram illustrating an example computing device which may be used to implement one or more features described herein, in accordance with some implementations.





DETAILED DESCRIPTION

One or more implementations described herein relate to accessibility of computers and associated graphical user interfaces (GUIs). Features can include automatically determining a physical placement of a physical object onto a display device, and manipulating computer-generated output such as: audio, haptic feedback, speech synthesis, and other outputs, based upon further movement and placement of the physical object. The physical object can be easily grasped, moved, and interpreted by users who are partially or fully visually impaired, as well as users who have vision affected by other factors.


It may be appreciated that the rise of automation has made content mastery or domain knowledge, processing speed, and memory less relevant features of human cognition in the context of an individual's preparedness for modern work and life. Instead, higher level, complex cognitive abilities, such as problem-solving, creativity, systems thinking, and critical thinking, have become more relevant features that make a difference in the individual's preparedness for modern work and life. In some aspects, systems and methods are provided for a simulation-based assessment that focuses on evaluating how an individual thinks instead of what the individual knows. Scenarios or tasks may be embedded within the simulation-based assessment that abstract the context of a given environment, e.g., a work environment, while maintaining opportunities for a user to portray problem-solving capabilities required by a job. Through scenarios that take place in the simulation-based assessment, details of a user's cognitive processes, not just end choices, may be observed.


Generally, the simulation-based assessment includes one or more scenarios. The scenarios may be based on any assessment's logical problems that are translated into a virtual assessment. According to one example, a natural world abstraction may be used as a scenario. According to other examples, an abstracted non-natural setting may be used as a scenario. Still further, according to other examples, contextual clues referencing logical problems may be used as a scenario. These and other settings may be applicable depending upon an implementation.


Additionally, variation in the distinct views and scenes for different versions of scenarios can be data-driven. Implemented systems may generate distinct views and scenes for different versions of scenarios using the logic associated with potential animals, plants, and terrain features in ways that adhere to human expectations. Further, assets that are informed by the data should be realistic. The system may account for the properties of each asset slotted to populate the scene. For example, in the generated scenario, animals that should swarm do swarm, animals that should fly do fly, and animals that should mingle and meander navigate the terrain as they would in real life. Less obvious elements of our daily perception of nature, such as plants, rocks, and the slope of terrain, may adhere to real-world rules as well, so that the background of scenes in scenarios stays in the background of the assessment.


As a user interacts with the assessment, information may be recorded regarding how the user approaches the task and/or the processes the user engages in while solving the task. The recorded information may include the user's telemetry data, e.g., mouse movements, clicks, choices, timestamps, and other suitable telemetry data. The user's telemetry data may be analyzed to examine the user's cognitive processes and/or overall performance. In addition to analysis of the user's telemetry data for correct or incorrect answers, the user's telemetry data may be analyzed to understand how the user solved a problem and/or what strategies he or she engaged in to solve the problem. This novel approach to cognitive testing in a given domain, e.g., the hiring domain, may provide an abundance of information to better assess which candidates are likely to succeed at a company.
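
By way of illustration only, the following Python sketch shows one way such telemetry might be recorded for later analysis; the event fields and class names are illustrative assumptions rather than a prescribed implementation.

    import json
    import time
    from dataclasses import dataclass, field, asdict

    @dataclass
    class TelemetryEvent:
        # Illustrative event: kind might be "placement", "move", or "click".
        kind: str
        x: float
        y: float
        timestamp: float

    @dataclass
    class TelemetryRecorder:
        events: list = field(default_factory=list)

        def record(self, kind: str, x: float, y: float) -> None:
            # Timestamp each interaction so strategies, pace, and corrections
            # can be reconstructed later, not just final answers.
            self.events.append(TelemetryEvent(kind, x, y, time.time()))

        def export(self) -> str:
            # Serialize the session for item-level scoring downstream.
            return json.dumps([asdict(e) for e in self.events])

    recorder = TelemetryRecorder()
    recorder.record("placement", 3.0, 5.0)
    recorder.record("move", 4.0, 5.0)
    print(recorder.export())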


The system may also implement a consistent calibration method for reconciling predicted complexity for scenarios with actual difficulty distributions to leverage automated item generation at scale across different scenarios. To derive the relationship between computational complexity and difficulty distributions, the system may receive as many reasonable parameters as possible to account for system variables (components of scenarios that the system serves up deterministically) and human variables (what test-takers do in a scenario). Using more data gathered throughout test development and iteration, the system may implement complexity estimator algorithms that become progressively better at approximating human behavior.
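
By way of illustration only, a minimal calibration of predicted complexity against observed difficulty could take the form of an ordinary least-squares fit, as in the following Python sketch; the sample numbers are hypothetical, and a production calibrator would likely use a richer model.

    # Hypothetical calibration data: complexity estimates for five scenarios
    # and the error rates test takers actually exhibited on them.
    predicted = [1.0, 2.0, 3.0, 4.0, 5.0]
    observed = [0.12, 0.21, 0.33, 0.41, 0.55]

    n = len(predicted)
    mean_x = sum(predicted) / n
    mean_y = sum(observed) / n
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(predicted, observed)) \
            / sum((x - mean_x) ** 2 for x in predicted)
    intercept = mean_y - slope * mean_x

    def calibrated_difficulty(complexity: float) -> float:
        # Map a raw complexity estimate to an expected difficulty.
        return intercept + slope * complexity

    print(calibrated_difficulty(3.5))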


Additionally, assessment scores are determined to quantify how a user's actions, timestamps, and performance within each scenario relate to various cognitive constructs. Cognitive science, educational psychology, and learning science theories may guide the mapping of each score to relevant constructs. The scores may focus both on the product (e.g., right or wrong) and on the process (e.g., how did they get there, what choices did they make, how many mistakes did they correct), which is more nuanced than traditional cognitive assessments.


The simulation-based assessments are also susceptible to accessibility issues for fully blind or other visually impaired test takers. In the United States, the ADA requires all state summative assessment programs to provide accommodations for fully blind or other visually impaired test takers. For years, the required accessibility accommodations have often proven too challenging for conventional assessment providers to solve when attempting to innovate toward next-generation item types. This has also prevented conventional assessment providers from building such aspects for use in large-scale assessment programs.


Accordingly, the systems, methods, and apparatuses described herein address accessible game-based assessments for tasks involving intensive computer-user and game-user interactions. In some aspects, to enable fully blind or visually impaired students to partake in game-based assessments, a digital game-based assessment task is combined with physical 3D interactive objects (which may also be referred to as manipulables or manipulatives). For example, to make this task accessible to visually impaired users, a screen reader calling out locations on the game board is combined with the physical 3D interactive objects users can feel and manipulate on a digital game board or GUI presented on a display device or another suitable computing device. A human proctor may initially position the physical 3D interactive objects to represent digital game board tiles or other graphical elements. The visually impaired users may continue to manipulate the physical 3D interactive objects and may be constantly able to feel the tiles and understand the game's state.


For example, while it may be possible to have every change and position on the game board called out via a screen reader as it happens on the screen, this may overwhelm the student or test taker and do more harm than good for the person's assessment. Instead, the person may be provided with physical 3D interactive objects (or tiles or figurines), e.g., rocks, mountains, grass, etc., to help interact with the game board representative of the natural environment. The proctor or another suitable third party may arrange the tiles to look like the initial screen. The person can tactilely feel the tiles and can press down on a tile to hear a read-out of what is happening on that portion of the GUI. This promotes a “second nature” interaction rather than having the person rely on audio-only feedback. The GUI may present a dynamic scenario where the tiles change over time. While some pieces may be fixed, others may move across the game board. The proctor may set up the tiles in an initial position, and the person may move the tiles as the game progresses. The person may hear audio, e.g., from a screen reader, and feel the tiles to determine which tile to move and where, depending on what is happening on the game board. This is particularly advantageous for fully blind or visually impaired students because each tile or figurine corresponds to the shape of the virtual object, e.g., a mountain, and not merely a certain beep or tone associated with the virtual object. This also helps make the assessment fairer compared to non-visually impaired students taking the same assessment. The tiles may have at least partially conductive portions on a base thereof such that the display device may sense or receive feedback as to positioning of the partially conductive portions. Further, by using digital game boards, the assessment may avoid waste of plastic boards and/or materials for assessments throughout a timeframe (e.g., a testing year) and allow for unique scenarios for different persons being assessed.


In some implementations, a system is provided that includes a computing device with a display screen and at least one movable object configured to be disposed on the display screen. The position of the movable object on the display screen corresponds to a position of a virtual object in the GUI contemporaneously displayed on the display screen. The position of the virtual object in the GUI is updated when the position of the movable object on the display screen is changed.
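
By way of illustration only, the following Python sketch shows one way the sensed position of a movable object could be mapped to a virtual object's position; the cell size and class structure are assumptions for illustration.

    from dataclasses import dataclass, field

    CELL_PX = 96  # assumed size of one board cell, in screen pixels

    @dataclass
    class VirtualBoard:
        # Maps object identifiers to (column, row) board cells.
        positions: dict = field(default_factory=dict)

        def on_object_sensed(self, object_id: str, px: float, py: float) -> None:
            # Convert the sensed screen coordinates of the object's conductive
            # base into a board cell, then move the matching virtual object.
            cell = (int(px // CELL_PX), int(py // CELL_PX))
            if self.positions.get(object_id) != cell:
                self.positions[object_id] = cell
                print(f"moved {object_id} to {cell}")  # stand-in for a GUI update

    board = VirtualBoard()
    board.on_object_sensed("mountain-1", 200.0, 310.0)  # lands in cell (2, 3)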


As users feel and move their tiles or physical objects, the digital game board makes audio announcements and reacts with haptics to help users have a solid sense of what their action did and where other variables on the game board stand.


As part of the simulation-based assessment, the system may capture telemetry data including a fully blind or other visually impaired person's interactions with the user interface using the movable object. The telemetry data may be used to generate a score for the fully blind or other visually impaired person taking the simulation-based assessment. The telemetry data collected on fully blind or other visually impaired students can capture a subtle trail of data that can be used to create the scores. The system can still capture the student interacting with the layout (clicking, finger movements, screen reader use, etc.), similar to their sighted peers. This allows the system to create the same or similar types of testing experiences and collect the same or similar data across all persons, disabled or otherwise. The system may apply an automated scoring algorithm to this telemetry data to generate the same or similar types of scores for all persons, disabled or otherwise.


The simulation-based assessment can be deployed locally in a secure, proctored environment. The simulation-based assessment can also be deployed remotely via timed releases where users may participate across any number of locations. In some implementations, to ensure that no two assessments are the same, artificial intelligence (AI) approaches may be applied to the process of scenario generation. Data-driven properties referenced across different scenarios may be varied in order to build unique versions of those scenarios. Each user who takes the simulation-based assessment may receive a unique task instance that, on the surface, is varied by its individual properties, complexity, and visual design, while structurally every task instance remains consistent in its assessment. While cheating and gaming remain significant challenges facing many traditional cognitive assessments, the AI and data-driven architecture of the simulation-based assessment may protect against cheating and gaming of the assessment. For example, because each user who takes the simulation-based assessment may receive a unique task instance, it may be harder for a given user taking the simulation-based assessment to benefit from another user's responses to one or more tasks in the simulation-based assessment.
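
By way of illustration only, per-user uniqueness of task instances might be achieved by deriving surface properties from a deterministic per-user seed, as in the following hypothetical Python sketch; the property lists are placeholders.

    import random

    def generate_scenario(user_id: str, assessment_id: str) -> dict:
        # A deterministic, user-unique seed varies surface properties while
        # structural invariants (grid size, goal) stay constant so results
        # remain comparable across test takers.
        rng = random.Random(f"{assessment_id}:{user_id}")
        return {
            "terrain": rng.choice(["hilly", "rocky", "wet", "grassy"]),
            "predators": rng.randint(1, 3),
            "palette": rng.choice(["dawn", "noon", "dusk"]),
            "grid": (10, 10),       # structural invariant
            "goal": "reach-exit",   # structural invariant
        }

    print(generate_scenario("user-42", "assessment-105"))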


Hereinafter, components of the above-referenced system, methods, and apparatuses are described more fully with reference to FIG. 1.



FIG. 1: System Architecture



FIG. 1 illustrates an example network environment 100, in accordance with some implementations of the disclosure. The network environment 100 (also referred to as “system” herein) includes an online assessment platform 102, a client device 110, and a network 122. The online assessment platform 102 can include, among other things, an assessment engine 104, one or more assessments 105, an accessibility engine 107, and a data store 108. The client device 110 can include a virtual assessment 112, an accessibility application 113, and a display screen 114, to interact with the online assessment platform 102.


Network environment 100 is provided for illustration. In some implementations, the network environment 100 may include the same, fewer, more, or different elements configured in the same or different manner as that shown in FIG. 1.


In some implementations, network 122 may include a public network (e.g., the Internet), a private network (e.g., a local area network (LAN) or wide area network (WAN)), a wired network (e.g., Ethernet network), a wireless network (e.g., an 802.11 network, a Wi-Fi® network, or wireless LAN (WLAN)), a cellular network (e.g., a Long Term Evolution (LTE) network), routers, hubs, switches, server computers, or a combination thereof.


In some implementations, the data store 108 may be a non-transitory computer readable storage medium (e.g., random access memory), a cache, a drive (e.g., a hard drive), a flash drive, a database system, or another type of component or device capable of storing data. The data store 108 may also include multiple storage components (e.g., multiple drives or multiple databases) that may also span multiple computing devices (e.g., multiple server computers).


In some implementations, the online assessment platform 102 can include a server having one or more computing devices (e.g., a cloud computing system, a rackmount server, a server computer, cluster of physical servers, virtual server, etc.). In some implementations, a server may be included in the online assessment platform 102, may be an independent system, or may be part of another system or platform.


In some implementations, the online assessment platform 102 may include one or more computing devices (such as a rackmount server, a router computer, a server computer, a personal computer, a mainframe computer, a laptop computer, a tablet computer, a desktop computer, etc.), data stores (e.g., hard disks, memories, databases), networks, software components, and/or hardware components that may be used to perform operations on the online assessment platform 102 and to provide a user with access to online assessment platform 102. The online assessment platform 102 may also include a website (e.g., one or more webpages) or application back-end software that may be used to provide a user with access to content provided by online assessment platform 102. For example, users (or proctors) may access online assessment platform 102 using the accessibility application 113 on client device 110.


In some implementations, online assessment platform 102 may provide connections between one or more assessment providers and/or employers that allow proctors (e.g., the persons administering an assessment) to communicate with other proctors via the online assessment platform 102, where the communication may include voice chat (e.g., synchronous and/or asynchronous voice communication), video chat (e.g., synchronous and/or asynchronous video communication), or text chat (e.g., synchronous and/or asynchronous text-based communication). In some implementations of the disclosure, a “user” may be represented as a single individual. However, other implementations of the disclosure encompass a “user” (e.g., testing or assessment user) being an entity controlled by a set of users or a group being assessed as to work skills and communication skills. For example, a set of individual users federated as a group being assessed may be considered a “user,” in some circumstances.


In some implementations, online assessment platform 102 may include digital asset and digital assessment generation provisions. For example, the platform may provide administrator interfaces allowing design, modification, unique tailoring for individuals, and other functions. In some implementations, assessments may include two-dimensional (2D) games, three-dimensional (3D) games, virtual reality (VR) games, or augmented reality (AR) games, for example. In some implementations, assessment creators and/or proctors may search for assessments, combine portions of assessments, tailor assessments for particular activities (e.g., group assessments), and use other features provided through the assessment platform 102.


One or more physical objects A through N (120-121) can be provided by the online assessment platform 102 in either data file format or direct print format. In some implementations, an assessment 105 can include an electronic file that can be executed or loaded using software, firmware or hardware configured to generate a physical print of the physical object (e.g., additive or subtractive printing apparatus). In some implementations, a virtual assessment 112 may be executed and an assessment 105 proctored in connection with an assessment engine 104. In some implementations, an assessment 105 may have a common set of rules or common goal, and the virtual environments of the assessment 105 share the common set of rules or common goal. In some implementations, different assessments may have different rules or goals from one another.


Using the physical objects 120-121, the user may interact with the virtual assessment 112 such that touch, haptic, audio, and other feedback is provided. As described above, if the virtual assessment includes activities related to a natural environment, the physical objects 120-121 may be representative of a particular portion of the natural environment, such as, for example, trees, grass, mountains, or other environmental features. Through movement of the physical objects 120-121 over the display screen 114 (e.g., using capacitive sensors to determine positioning of the object), the client device 110 may provide an intuitive, natural assessment process by which a person may interact while remaining more aware of the assessment as compared to traditional, audio-only proctoring.
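
By way of illustration only, movement of a physical object might be paired with haptic and spoken feedback as in the following Python sketch; the Haptics and Speech classes are stand-ins for platform-specific actuators and synthesizers.

    class Haptics:
        def pulse(self, duration_ms: int) -> None:
            print(f"[haptic pulse, {duration_ms} ms]")  # stand-in for an actuator

    class Speech:
        def say(self, text: str) -> None:
            print(f"[spoken] {text}")  # stand-in for a speech synthesizer

    def on_object_moved(object_id, old_cell, new_cell, haptics, speech):
        # Pair each sensed movement with haptic and spoken feedback so a
        # visually impaired user can confirm the effect of the action.
        haptics.pulse(duration_ms=40)
        speech.say(f"{object_id} moved from {old_cell} to {new_cell}")

    on_object_moved("grass-2", (1, 1), (1, 2), Haptics(), Speech())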


In some implementations, online assessment platform 102 or client device 110 may include the assessment engine 104 or virtual assessment 112. In some implementations, assessment engine 104 may be used for the development or execution of assessments 105. For example, assessment engine 104 may include a rendering engine (“renderer”) for 2D, 3D, VR, or AR graphics, a physics engine, a collision detection engine (and collision response), sound engine, scripting functionality, haptics engine, artificial intelligence engine, networking functionality, streaming functionality, memory management functionality, threading functionality, scene graph functionality, or video support for cinematics, among other features. The components of the assessment engine 104 may generate commands that help compute and render the assessment (e.g., rendering commands, collision commands, physics commands, etc.).


The online assessment platform 102 using assessment engine 104 may perform some or all of the assessment engine functions (e.g., generate physics commands, rendering commands, etc.), or offload some or all of the assessment engine functions to assessment engine 104 of client device 110 (not illustrated). In some implementations, each assessment 105 may have a different ratio between the assessment engine functions that are performed on the online assessment platform 102 and the assessment engine functions that are performed on the client device 110.


In some implementations, assessment instructions may refer to instructions that allow a client device 110 to render gameplay, graphics, and other features of an assessment, such as a natural world rendering having a logical problem represented therein. The instructions may include one or more of user input (e.g., physical object positioning), character position and velocity information, or commands (e.g., physics commands, rendering commands, collision commands, etc.). The instructions may be audibly prompted by an assessment proctor, audibly presented by a speech synthesizer, physically represented by haptic feedback (e.g., vibration at borders, misalignment, etc.), or a combination of the same.


In some implementations, the client device(s) 110 may each include computing devices such as personal computers (PCs), mobile devices (e.g., laptops, mobile phones, smart phones, tablet computers, or netbook computers), network-connected televisions, gaming consoles, etc. In some implementations, a client device 110 may also be referred to as a “user device.” In some implementations, one or more client devices 110 may connect to the online assessment platform 102 at any given moment. It may be noted that the single client device 110 is provided as illustration, rather than limitation. In some implementations, any number of client devices 110 may be used.


In some implementations, each client device 110 may include an instance of the virtual assessment 112. The virtual assessment 112 may be representative of an educational assessment examination, a vocational assessment examination, or any suitable assessment, whether standardized or uniquely tailored to a particular individual or group of individuals. While described herein as related to visually-impaired persons, it should be understood that the attributes of the physical objects, haptic feedback, speech synthesis, and visual representation of the GUI on the display screen 114 allow any person with at least one functioning hand to at least partially interact with an equivalent test form and therefore perform the assessment 105.


In some implementations, a user may login to online assessment platform 102 via the accessibility application 113. The user may access a user account by providing user account information (e.g., username and password) wherein the user account is associated with one or more assessments 105 of online assessment platform 102. The username and password may also be replaced by photo ID or other identification with assistance from a proctor.


In general, functions described as being performed by the online assessment platform 102 can also be performed by the client device(s) 110, or a server, in other implementations if appropriate. In addition, the functionality attributed to a particular component can be performed by different or multiple components operating together. The online assessment platform 102 can also be accessed as a service provided to other systems or devices through appropriate application programming interfaces (APIs) for customized assessments or private assessments, and thus is not limited to use in websites.


In some implementations, online assessment platform 102 may include an accessibility engine 107. In some implementations, the accessibility engine 107 may be a system, application, or module that provides computer-executable instructions or functions to facilitate use of speech synthesis, haptic feedback, and any other available feature to make assessments more intuitive for visually impaired persons.


As described briefly above, the assessments 105 may be created, modified, and proctored to persons with disabilities. The assessments 105 (and any associated data files representative of the physical objects 120-121) may be stored in data store 108. Additionally, according to one implementation, any data files associated with the physical objects 120-121 may be transmitted to any physical location of assessment such that recreations, copies, or physical prints of the physical objects 120-121 may be made. The assessments may be representative of logical problems to judge or assess a person, and may include a GUI. Hereinafter, a more detailed description of an example GUI is provided with reference to FIG. 2.



FIG. 2: Virtual Assessment GUI



FIG. 2 is a schematic of an example graphical user interface (GUI) 200 for accessible computer-user interactions, in accordance with some implementations. The GUI 200 may include, at a minimum, a playable area 202 and a set of instructions 215. Generally, the instructions 215 may be read aloud by a proctor. Alternatively, or in combination, a speech synthesizer on client device 110 may provide an audio description of the instructions 215, for example, by synthesizing speech portions and playing them back via computer audio. Pre-recorded audio may also be played in some embodiments.
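
By way of illustration only, the following Python sketch synthesizes and plays instruction audio using the pyttsx3 package (any other speech synthesizer could be substituted); the instruction text is a placeholder.

    import pyttsx3  # off-line text-to-speech engine

    instructions = "Move your avatar to the goal while avoiding the predator."

    engine = pyttsx3.init()
    engine.setProperty("rate", 150)  # a slower speaking rate can aid comprehension
    engine.save_to_file(instructions, "instructions.wav")  # generate an audio file
    engine.runAndWait()
    engine.say(instructions)  # or speak the instructions directly
    engine.runAndWait()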


The playable area 202 may be representative of an assessment 105, and may include at least one goal 212 for a logical problem. The goal 212 may be presented in any area of the GUI, and may be different for each person being assessed.


The playable area 202 may include defined sections 206 that may represent different environmental attributes, such as, hilly areas, rocky areas, wet areas, grassy areas, and virtually any environmental attribute to be considered a portion of a logical problem represented by the assessment 105. It is noted that while a natural-world setting is used in this example, any abstracted setting including non-natural, virtual, and purely mathematical abstractions may also be applicable. Furthermore, any non-abstracted setting including non-natural, natural, virtual, and other non-abstracted settings may also be applicable.


During assessment, an objective may include traversing the defined sections 206 such that a player avatar 208 avoids a predator avatar 209 and traverses safely until the goal 212 is reached. While a path 211 is illustrated, it should be understood that the path 211 is a dynamic graphical element that may be altered through placement of physical objects onto the display screen above the GUI 200. For example, graphical elements 204 represent the physical locations of the physical objects 120-121 on the display screen in a virtual manner. By moving physical objects, and repositioning them during gameplay/assessment, the path 211 may adjust, for example, to traverse through the graphical element (if representative of a bridge or grassy area) or avoid the graphical element (if representative of a rocky area or mountain).
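
By way of illustration only, the recomputation of the path 211 after a physical object is repositioned can be sketched as a breadth-first search over the board grid, as in the following Python example; the passability rules (grass and bridges passable, mountains and rocks blocked) are illustrative assumptions.

    from collections import deque

    def recompute_path(grid, start, goal):
        # grid[r][c] is True when the cell is passable (e.g., grass, bridge)
        # and False when blocked (e.g., mountain, rocks).
        rows, cols = len(grid), len(grid[0])
        queue = deque([(start, [start])])
        seen = {start}
        while queue:
            (r, c), path = queue.popleft()
            if (r, c) == goal:
                return path
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nr, nc = r + dr, c + dc
                if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] \
                        and (nr, nc) not in seen:
                    seen.add((nr, nc))
                    queue.append(((nr, nc), path + [(nr, nc)]))
        return None  # no route exists until an object is repositioned

    grid = [[True, True, True],
            [False, False, True],   # a row of mountains with one gap
            [True, True, True]]
    print(recompute_path(grid, (0, 0), (2, 0)))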


As further shown, a reserve area of graphical elements 210 may be provided to display types of physical objects (and therefore virtual graphical elements) that may be placed in the playable area 202. Additionally, other virtual elements 225 may be provided for touch-interaction with a display screen (e.g., fast-forward, enable/disable haptics, pause assessment, request assistance, etc.).


It should be understood that the actual visual characteristics, placement of elements, locations of avatars/playable features, and other aspects of the GUI 200 are dynamic and may be customized for any particular assessment. Therefore, the form illustrated in FIG. 2 is not restrictive of all embodiments, but rather represents a single use-case for aiding in understanding of the aspects of this disclosure.


As described above, physical objects may be manipulated to alter the graphical elements displayed in GUI 200. Hereinafter, a more detailed description of an example physical object is provided with reference to FIG. 3A and FIG. 3B.



FIG. 3: Physical Objects



FIG. 3A is an isometric view of a physical object 120 for accessible computer-user interactions, in accordance with some implementations. FIG. 3B is an elevation view of the physical object 120 for accessible computer-user interactions, in accordance with some implementations. It should be understood that the particular form illustrated in FIGS. 3A and 3B may be varied in many ways. Accordingly, while a particular physical object is illustrated, any physical object may be applicable.


The object 120 may include a partially conductive base 302. The base 302 may be configured to alter capacitance of a portion of a display device, for example, a touch screen. The base 302 may include, for example, applied conductive tape, conductive strips, conductive stickers, conductive paint, or any other conductive material configured to alter the capacitance of a display device such that a computer may sense and determine a location of a physical object in relation to a playable area 202 of a GUI 200.


An upper portion 304 is disposed on the base 302. The upper portion 304 may include a physical representation or features 306 of an associated graphical element 210 (in this example, a mountain). Generally, the physical representation 306 may include one or more of texture, embossing, paint, divots, braille, and/or other features identifiable by touch that represent the graphical element. Additionally, indicia 308 may be imprinted or disposed on the object 120 so that non-visually impaired persons can readily appreciate exactly what the features 306 represent (e.g., if the object does not include visual characteristics such as color or paint, a non-visually impaired user may not be able to identify it without touch).


According to one implementation, the object 120 is a customized 3D printed object that can be represented with a data file stipulating how an additive or subtractive printing apparatus can recreate the object 120. For example, a printing sequence may be described by the data file that allows the printing apparatus to deposit (or remove) layers of material until the physical object 120 is created (or recreated).
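
By way of illustration only, such a data file might enumerate deposition layers as in the following Python sketch; real additive-manufacturing toolchains use formats such as G-code, and the JSON schema here is an assumption for illustration.

    import json

    def build_print_sequence(object_name: str, layer_heights_mm: list,
                             conductive_base_layers: int) -> str:
        # Enumerate layers bottom-up; the first few are flagged so conductive
        # material (or a later coating) forms the partially conductive base 302.
        layers = [
            {"index": i, "height_mm": h,
             "material": "conductive" if i < conductive_base_layers else "resin"}
            for i, h in enumerate(layer_heights_mm)
        ]
        return json.dumps({"object": object_name, "layers": layers}, indent=2)

    print(build_print_sequence("mountain-tile", [0.2] * 5, conductive_base_layers=2))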


The physical objects may be distributed to persons being assessed for use in conducting an assessment, as described more fully below.



FIGS. 4 & 5: Conducting the Assessment



FIG. 4 is a visualization of a virtual assessment implemented with the GUI of FIG. 2 and one or more physical objects 120, in accordance with some implementations. As illustrated, a tablet computer having display device 402 is provided by, or to, the person being assessed. Upon initialization of the assessment, various physical objects 120 are placed on the display 402 such that they appropriately align with graphical elements 210 of the GUI 200. A proctor or non-visually impaired assistant may help initially place the objects 120.


Once the assessment begins, a person may touch, feel, manipulate, and move the physical objects on the display 402. Furthermore, other objects 120 may be placed on the display 402 such that the path 211 is appropriately altered to effectuate assessment of the problem-solving skills of the person.


As a user interacts with the assessment through the device 400, information may be recorded regarding how the user approaches the task and/or the processes the user engages in while solving the task. For example, FIG. 5 illustrates an example data structure 500 for aggregating user assessment scores, in accordance with some implementations. As illustrated, the data structure 500 may be organized as a table, although other formats are acceptable.


The recorded information may include the user's telemetry data, e.g., mouse movements, clicks, choices, timestamps, and other suitable telemetry data. The user's telemetry data may be analyzed and processed via item-level scoring algorithms and psychometric scoring models to make claims about a user's cognitive processes and/or overall performance. In addition to analysis of the user's telemetry data for correct or incorrect answers, the user's telemetry data may be analyzed to understand how the user solved a problem and/or what strategies he or she engaged in to solve the problem. This novel approach to cognitive testing in a given domain, e.g., the hiring domain, may provide an abundance of information to better assess which candidates are likely to succeed at a company.


Additionally, assessment scores are determined to quantify how a user's actions, timestamps, and performance within each scenario relate to various cognitive constructs. Cognitive science, educational psychology, and learning science theories may guide the mapping of each score to relevant constructs. The scores may focus both on the product (e.g., right or wrong) and on the process (e.g., how did they get there, what choices did they make, how many mistakes did they correct), which is more nuanced than traditional cognitive assessments.


Accordingly, for each user 501 being assessed, an individual aggregated assessment score may be created and stored in the table 500 for presentation to a prospective employer after the assessment 105 is completed. The aggregated score may be a simple sum, a weighted average, or any other aggregate that closely reflects an actual assessment of a person's skills, as opposed to their particular knowledge base.
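
By way of illustration only, a weighted aggregate of item-level scores might be computed as in the following Python sketch; the item names and weights are placeholders.

    def aggregate_score(item_scores: dict, weights: dict) -> float:
        # Weighted average of item-level scores, each on a 0-1 scale.
        total_weight = sum(weights[item] for item in item_scores)
        return sum(score * weights[item]
                   for item, score in item_scores.items()) / total_weight

    item_scores = {"pathing": 0.8, "error-correction": 0.6, "planning": 0.9}
    weights = {"pathing": 2.0, "error-correction": 1.0, "planning": 1.0}
    print(round(aggregate_score(item_scores, weights), 3))  # 0.775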


Hereinafter, the methodology of providing accessible computer-user interactions in assessments is described more fully with reference to FIG. 6.



FIG. 6: Example Method of Accessible Computer-User Interactions



FIG. 6 is a flowchart of an example method for accessible computer-user interactions, in accordance with some implementations. In some implementations, method 600 can be implemented, for example, on a server system, e.g., online assessment platform 102 as shown in FIG. 1. In some implementations, some or all of the method 600 can be implemented on a system such as one or more client devices 110 as shown in FIG. 1, and/or on both a server system and one or more client systems. In described examples, the implementing system includes one or more processors or processing circuitry, and one or more storage devices such as a database or other accessible storage. In some implementations, different components of one or more servers and/or clients can perform different blocks or other parts of the method 600. Method 600 may begin at block 602.


In block 602, a GUI (e.g., GUI 200) is displayed on a display screen (e.g., display screen 402) or display device. The GUI can include a virtual assessment 105 that is representative of an assessment examination. The GUI can also include one or more graphical elements (e.g., graphical elements 210) that represent a portion of a logical problem of the assessment. Block 602 may be followed by block 604.


In block 604, a signal indicative of placement of a physical object onto the display screen, above the graphical element, is received. Generally, an initial placement by a proctor may signal the assessment has begun. However, other subsequent placements and/or activation of controls 225 may also indicate the assessment has begun or trigger the beginning of the assessment. Block 604 may be followed by block 606.


In block 606, graphical elements 210 may be repositioned responsive to movement of one or more physical objects that are on the display device. For example, during assessment, a person may physically slide, shift, press, or otherwise manipulate the physical object on the display screen. Upon detecting movement, the method can include repositioning the graphical elements to match the real-world movement. Block 606 may be followed by block 608.


In block 608, at least a portion of an assessment score is accumulated or tabulated based on the signal and the repositioning. For example, as illustrated in FIG. 5, assessment scores may be aggregated as a person completes different portions of the assessment. Furthermore, telemetry data based on shifting, pushing, or otherwise manipulating the physical objects may also be recorded to gauge other aspects of a person's assessment such as speed, agility, comprehension, mistakes, etc. Additionally, as illustrated in FIG. 5, a user's telemetry data (based at least in part on a positioning signal, movement/repositioning of graphical elements, and/or interactions with the physical object(s)) may together inform an item-level score, and many item-level scores together compose the whole assessment score. Block 608 may be followed by blocks 610 and 612.


At block 610, a determination is made as to whether motion is detected. If motion is detected, haptic feedback may be provided at block 609 followed by repetition or iteration of block 606.


At block 612, (e.g., if no motion was detected) a determination is made as to whether the assessment is complete. If the assessment is not complete, the assessment may continue at blocks 602-610, in any manner of repetition until complete. If the assessment is complete, a final assessment score is generated at block 614.
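
By way of illustration only, blocks 602-614 can be summarized as an event loop, as in the following Python sketch; the event kinds and helper classes paraphrase the flowchart and are not a prescribed implementation.

    class Board:
        def reposition(self, payload):
            print(f"reposition {payload}")          # block 606 stand-in

    class Haptics:
        def pulse(self, duration_ms):
            print(f"[haptic, {duration_ms} ms]")    # block 609 stand-in

    class Scorer:
        def __init__(self):
            self.points = 0.0
        def note_signal(self, payload):
            self.points += 1.0                      # block 608: partial score
        def note_reposition(self, payload):
            self.points += 0.5
        def final_score(self):
            return self.points                      # block 614: final score

    def run_assessment(events, board, scorer, haptics):
        # events: an iterable of (kind, payload) tuples from the touch screen.
        for kind, payload in events:
            if kind == "placement":                 # block 604
                scorer.note_signal(payload)
            elif kind == "movement":                # blocks 606, 609, 610
                board.reposition(payload)
                haptics.pulse(duration_ms=40)
                scorer.note_reposition(payload)
            elif kind == "complete":                # block 612
                break
        return scorer.final_score()

    events = [("placement", "mountain-1"),
              ("movement", "mountain-1"),
              ("complete", None)]
    print(run_assessment(events, Board(), Scorer(), Haptics()))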


Blocks 602-614 can be performed (or repeated) in a different order than described above and/or one or more blocks can be omitted, supplemented with further block(s), combined together, modified, etc. Hereinafter, assessment creation is described with reference to FIG. 7.



FIG. 7: Creating Assessments and Content



FIG. 7 is a flowchart of an example method for creating accessible computer-user interactions, in accordance with some implementations. In some implementations, method 700 can be implemented, for example, on a server system, e.g., online assessment platform 102 as shown in FIG. 1. In some implementations, some or all of the method 700 can be implemented on a system such as one or more client devices 110 as shown in FIG. 1, and/or on both a server system and one or more client systems. In described examples, the implementing system includes one or more processors or processing circuitry, and one or more storage devices such as a database or other accessible storage. In some implementations, different components of one or more servers and/or clients can perform different blocks or other parts of the method 700. Method 700 may begin at block 702.


At block 702, a request for an assessment available on an online assessment platform (e.g., online assessment platform 102) may be received from a user or proctor. For example, the user may utilize a search engine or other interface (e.g., browsing interface) provided by the platform. The list of available items may be stored in the online assessment platform 102, for example, through data store 108, in some implementations. The request may also be embodied as activation or selection of a hyperlink to content available on the online assessment platform (e.g., a link from an external source such as a website, social network, or newsgroup). The hyperlink or “link” may include a direct link or a query including identifying data or other data. Block 702 may be followed by block 704.


At block 704, a particular assessment or content is identified based on the request. For example, if the request is a search query, a list of matching assessments may be returned through a search engine on the online assessment platform. Similarly, a database query may be performed to identify one or more assessments or other content items. Block 704 may be followed by block 706.


At block 706, a physical object data file may be generated. The physical object data file may include a sequence of printing/removal operations necessary to recreate a physical object on a 3D printer/CNC machine, or other manufacturing apparatus. Block 706 may be followed by block 708.


At block 708, a virtual assessment may be generated. For example, and as described above, each individual's assessment may be tailored according to a variety of factors. Thus, the method 700 may facilitate customization through additions, subtractions, combinations, and any other manner. Block 708 may be followed by block 710.


At block 710, the virtual assessment created at block 708 and the physical object data file(s) may be provided to the requestor.


Blocks 702-710 can be performed (or repeated) in a different order than described above and/or one or more blocks can be omitted, supplemented with further block(s), combined together, modified, etc. Methods 600 and/or 700 can be performed on a server (e.g., 102) and/or a client device (e.g., 110). Furthermore, portions of the methods 600 and 700 may be combined and performed in sequence or in parallel, according to any desired implementation.


As described above, GUIs presenting scenarios or tasks may be embedded within the simulation-based assessment to abstract the context of a given environment, e.g., a work environment, while maintaining opportunities for a user to portray problem-solving capabilities required by a job. Through scenarios that take place in the simulation-based assessment, details of a user's cognitive processes, not just end choices, may be observed. The aggregation of the assessment score from a multitude of telemetry data bolsters the applicability of the assessment in determining how capable a person is in solving particular problems, working in groups, following spoken instructions from another person or computer, and/or other attributes which may be representative of a job or vocation.


Hereinafter, a more detailed description of various computing devices that may be used to implement different devices illustrated in FIG. 1 is provided with reference to FIG. 8.



FIG. 8 is a block diagram of an example computing device 800 which may be used to implement one or more features described herein, in accordance with some implementations. In one example, device 800 may be used to implement a computer device (e.g., online assessment platform 102 and client device 110 of FIG. 1), and perform appropriate method implementations described herein. Computing device 800 can be any suitable computer system, server, or other electronic or hardware device. For example, the computing device 800 can be a mainframe computer, desktop computer, workstation, portable computer, or electronic device (portable device, mobile device, cell phone, smart phone, tablet computer, television, TV set top box, personal digital assistant (PDA), media player, game device, wearable device, etc.). In some implementations, device 800 includes a processor 802, a memory 804, an input/output (I/O) interface 806, audio/video input/output devices 814 (e.g., display screen, touchscreen, display goggles or glasses, audio speakers, microphone, etc.), haptic devices 821 (e.g., actuators, vibrating motors, solenoids, etc.), and/or audio devices 823 (e.g., speech synthesis devices and/or speakers).


Processor 802 can be one or more processors and/or processing circuits to execute program code and control basic operations of the device 800. A “processor” includes any suitable hardware and/or software system, mechanism or component that processes data, signals or other information. A processor may include a system with a general-purpose central processing unit (CPU), multiple processing units, dedicated circuitry for achieving functionality, or other systems. Processing need not be limited to a particular geographic location, or have temporal limitations. For example, a processor may perform its functions in “real-time,” “offline,” in a “batch mode,” etc. Portions of processing may be performed at different times and at different locations, by different (or the same) processing systems. A computer may be any processor in communication with a memory.


Memory 804 is typically provided in device 800 for access by the processor 802, and may be any suitable processor-readable storage medium, e.g., random access memory (RAM), read-only memory (ROM), electrically erasable read-only memory (EEPROM), flash memory, etc., suitable for storing instructions for execution by the processor, and located separate from processor 802 and/or integrated therewith. Memory 804 can store software executed on the server device 800 by the processor 802, including an operating system 808, an assessment engine application 810, and associated data 812. In some implementations, the assessment engine application 810 can include instructions that enable processor 802 to perform or control performance of the functions described herein, e.g., some or all of the methods of FIGS. 6 and 7. In some implementations, the assessment engine application 810 may also include one or more machine learning models for generating new assessments, new digital assets, new virtual representations of physical objects, and for providing user interfaces and/or other features of the platform, as described herein.


For example, memory 804 can include software instructions for an assessment engine 810 that can provide assessments including intuitive haptic feedback and audio speech synthesis. Any of the software in memory 804 can alternatively be stored on any other suitable storage location or computer-readable medium. In addition, memory 804 (and/or other connected storage device(s)) can store instructions and data used in the features described herein. Memory 804 and any other type of storage (magnetic disk, optical disk, magnetic tape, or other tangible media) can be considered “storage” or “storage devices.”


I/O interface 806 can provide functions to enable interfacing the server device 800 with other systems and devices. For example, network communication devices, storage devices (e.g., memory and/or data store 108), and input/output devices can communicate via interface 806. In some implementations, the I/O interface can connect to interface devices including input devices (keyboard, pointing device, touchscreen, microphone, camera, scanner, etc.) and/or output devices (display device, speaker devices, printer, motor, etc.).


For example, an additive or subtractive printing apparatus 825 may be controlled through the I/O interface 806. Suitable additive apparatuses may include 3D printers using resin, filament, or other additive techniques; suitable subtractive apparatuses may include CNC lathes, mills, and/or routers.
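
As a non-limiting illustration of this arrangement, the sketch below generates a data file representative of a printing sequence for a copy of a physical object and hands it off for transmission to the printing apparatus. The JSON job format, field names, and send_job helper are assumptions for illustration; no particular file format or transmission protocol is disclosed.

```python
# Illustrative only: the job file format, field names, and send_job helper
# are assumptions; no particular printing format or protocol is disclosed.
import json
from pathlib import Path


def build_print_job(object_id: str, process: str) -> dict:
    # A real job would embed geometry (e.g., a mesh or toolpath); this stub
    # records only an ordered printing sequence for a copy of the object.
    return {
        "object_id": object_id,
        "process": process,  # "additive" (3D print) or "subtractive" (CNC)
        "steps": ["load_material", "form_base", "apply_conductive_layer"],
    }


def send_job(job: dict, out_path: Path) -> None:
    # Stand-in for transmitting the data file to apparatus 825 via I/O interface 806.
    out_path.write_text(json.dumps(job, indent=2))


send_job(build_print_job("tactile-triangle", "additive"), Path("print_job.json"))
```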


For ease of illustration, FIG. 8 shows one block for each of processor 802, memory 804, I/O interface 806, software blocks 808 and 810, and database 812. These blocks may represent one or more processors or processing circuitries, operating systems, memories, I/O interfaces, applications, and/or software modules. In other implementations, device 800 may not have all of the components shown and/or may have other elements including other types of elements instead of, or in addition to, those shown herein. While the online assessment platform 102 is described as performing operations as described in some implementations herein, any suitable component or combination of components of online assessment platform 102 or similar system, or any suitable processor or processors associated with such a system, may perform the operations described.


A user device can also implement and/or be used with features described herein. Example user devices can be computer devices including some components similar to those of device 800, e.g., processor(s) 802, memory 804, and I/O interface 806. An operating system, software, and applications suitable for the client device can be provided in memory and used by the processor. The I/O interface for a client device can be connected to network communication devices, as well as to input and output devices, e.g., a microphone for capturing sound, a camera for capturing images or video, audio speaker devices for outputting sound, a display device for outputting images or video, or other output devices. A display device within the audio/video input/output devices 814, for example, can be connected to (or included in) the device 800 to display images pre- and post-processing as described herein, where such display device can include any suitable display device, e.g., an LCD, LED, or plasma display screen, CRT, television, monitor, touchscreen, 3-D display screen, projector, or other visual display device. Some implementations can provide an audio output device, e.g., voice output or synthesis that speaks text.
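
As one concrete possibility for such a voice-output device, the short sketch below uses the third-party pyttsx3 text-to-speech library; the choice of library is an assumption made for illustration, as no specific speech synthesizer is required herein.

```python
# One possible realization using the third-party pyttsx3 library
# (pip install pyttsx3); the library choice is an assumption.
import pyttsx3

engine = pyttsx3.init()  # selects the platform's text-to-speech backend
engine.say("Place the triangle piece on the highlighted square.")
engine.runAndWait()  # blocks until the utterance has finished playing
```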


The methods, blocks, and/or operations described herein can be performed in a different order than shown or described, and/or performed simultaneously (partially or completely) with other blocks or operations, where appropriate. Some blocks or operations can be performed for one portion of data and later performed again, e.g., for another portion of data. Not all of the described blocks and operations need be performed in various implementations. In some implementations, blocks and operations can be performed multiple times, in a different order, and/or at different times in the methods.


In some implementations, some or all of the methods can be implemented on a system such as one or more client devices. In some implementations, one or more methods described herein can be implemented, for example, on a server system, and/or on both a server system and a client system. In some implementations, different components of one or more servers and/or clients can perform different blocks, operations, or other parts of the methods.


One or more methods described herein (e.g., methods 600 and/or 700) can be implemented by computer program instructions or code, which can be executed on a computer. For example, the code can be implemented by one or more digital processors (e.g., microprocessors or other processing circuitry), and can be stored on a computer program product including a non-transitory computer-readable medium (e.g., storage medium), e.g., a magnetic, optical, electromagnetic, or semiconductor storage medium, including semiconductor or solid state memory, magnetic tape, a removable computer diskette, a random access memory (RAM), a read-only memory (ROM), flash memory, a rigid magnetic disk, an optical disk, a solid-state memory drive, etc. The program instructions can also be contained in, and provided as, an electronic signal, for example in the form of software as a service (SaaS) delivered from a server (e.g., a distributed system and/or a cloud computing system). Alternatively, one or more methods can be implemented in hardware (logic gates, etc.), or in a combination of hardware and software. Example hardware can be programmable processors (e.g., a field-programmable gate array (FPGA) or complex programmable logic device), general purpose processors, graphics processors, application specific integrated circuits (ASICs), and the like. One or more methods can be performed as part of, or as a component of, an application running on the system, or as an application or software running in conjunction with other applications and an operating system.


One or more methods described herein can be run as a standalone program on any type of computing device, as a program run in a web browser, or as a mobile application (“app”) executing on a mobile computing device (e.g., cell phone, smart phone, tablet computer, wearable device (wristwatch, armband, jewelry, headwear, goggles, glasses, etc.), laptop computer, etc.). In one example, a client/server architecture can be used, e.g., a mobile computing device (as a client device) sends user input data to a server device and receives from the server the final output data for output (e.g., for display). In another example, all computations can be performed within the mobile app (and/or other apps) on the mobile computing device. In another example, computations can be split between the mobile computing device and one or more server devices.
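
A minimal client-side sketch of the client/server split described above follows. The endpoint URL and JSON field names are hypothetical; in the all-on-device variant, a local scoring function would replace the network call entirely.

```python
# Hypothetical client-side sketch; the endpoint URL and JSON fields are
# assumptions. Requires a live server, so the call is left commented out.
import json
from urllib import request


def send_input_to_server(events: list) -> dict:
    # The client device sends user input data and receives the final
    # output data (e.g., a score portion) from the server for display.
    payload = json.dumps({"events": events}).encode("utf-8")
    req = request.Request(
        "https://assessment.example.com/api/score",  # hypothetical endpoint
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with request.urlopen(req) as resp:
        return json.load(resp)


events = [{"element": "triangle piece", "x": 2, "y": 5, "t_ms": 1840}]
# result = send_input_to_server(events)  # uncomment with a real server
```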


Although the description has been described with respect to particular implementations thereof, these particular implementations are merely illustrative, and not restrictive. Concepts illustrated in the examples may be applied to other examples and implementations.


In situations in which certain implementations discussed herein may obtain or use user data (e.g., user demographics, user behavioral data during an assessment, etc.), users are provided with options to control whether and how such information is collected, stored, or used. That is, the implementations discussed herein collect, store, and/or use user information only upon receiving explicit user authorization and in compliance with applicable regulations.


Users are provided with control over whether programs or features collect user information about that particular user or other users relevant to the program or feature. Each user for whom information is to be collected is presented with options (e.g., via a user interface) to allow the user to exert control over the information collection relevant to that user, and to provide permission or authorization as to whether the information is collected and as to which portions of the information are to be collected. In addition, certain data may be modified in one or more ways before storage or use, such that personally identifiable information is removed. As one example, a user's identity may be modified (e.g., by substitution with a pseudonym, numeric value, etc.) so that no personally identifiable information can be determined. In another example, a user's geographic location may be generalized to a larger region (e.g., city, zip code, state, country, etc.).
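
The de-identification steps described above can be illustrated with a short sketch: a one-way pseudonym is substituted for the user's identity, and a precise location is generalized to a coarse region. The salt handling, field names, and grid granularity are illustrative assumptions, not a prescribed scheme.

```python
# Illustrative sketch of the modifications described above; salt handling,
# field names, and grid granularity are assumptions, not a prescribed scheme.
import hashlib


def pseudonymize(user_id: str, salt: str) -> str:
    # One-way substitution: the stored value cannot be traced back to the user.
    return hashlib.sha256((salt + user_id).encode("utf-8")).hexdigest()[:16]


def generalize_location(lat: float, lon: float) -> str:
    # Coarsen precise coordinates to a one-degree grid cell (a broad region).
    return f"region_{round(lat)}_{round(lon)}"


record = {
    "user": pseudonymize("jane.doe@example.com", salt="per-deployment-secret"),
    "location": generalize_location(40.7128, -74.0060),
}
print(record)  # e.g., {'user': '…', 'location': 'region_41_-74'}
```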


Note that the functional blocks, operations, features, methods, devices, and systems described in the present disclosure may be integrated or divided into different combinations of systems, devices, and functional blocks. Any suitable programming language and programming techniques may be used to implement the routines of particular implementations. Different programming techniques may be employed, e.g., procedural or object-oriented. The routines may execute on a single processing device or multiple processors. Although the steps, operations, or computations may be presented in a specific order, the order may be changed in different particular implementations. In some implementations, multiple steps or operations shown as sequential in this specification may be performed at the same time.

Claims
  • 1. A computer-implemented method of accessible computer-user interactions, the method comprising: displaying a graphical user interface on a display screen, wherein the graphical user interface comprises a virtual assessment, wherein the virtual assessment is representative of an assessment examination, and wherein the graphical user interface further comprises a graphical element that represents a portion of a logical problem of the assessment examination; receiving a signal indicative of placement of a physical object onto the display screen above the graphical element; repositioning the graphical element responsive to physical movement of the physical object on the display screen; and generating a portion of an assessment score based at least in part on one or more of: physical movement of the physical object on the display screen, the signal, or the repositioning of the graphical element.
  • 2. The computer-implemented method of claim 1, wherein the graphical user interface further comprises a plurality of graphical elements that each represent a different portion of the logical problem of the assessment examination, and wherein the method further comprises: receiving a plurality of signals indicative of placement of respective physical objects onto the display screen above a corresponding graphical element of the plurality of graphical elements; repositioning a graphical element of the plurality of graphical elements responsive to physical movement of a respective physical object; and generating the assessment score based on the plurality of signals and the repositioning of the graphical element of the plurality of graphical elements.
  • 3. The computer-implemented method of claim 1, wherein the assessment examination is a test for assessment of a vocational candidate.
  • 4. The computer-implemented method of claim 1, wherein the graphical user interface further comprises one or more instructional elements comprising indicia that represent instructions to complete the virtual assessment.
  • 5. The computer-implemented method of claim 4, further comprising: generating an audio file using a speech synthesizer, wherein the audio file comprises one or more segments of synthesized speech of the instructions to complete the virtual assessment; and audibly playing the audio file.
  • 6. The computer-implemented method of claim 1, further comprising: generating an audio file using a speech synthesizer, wherein the audio file comprises one or more segments of synthesized speech that describe the logical problem; and audibly playing the audio file responsive to receiving the signal.
  • 7. The computer-implemented method of claim 1, further comprising: generating an audio file using a speech synthesizer, wherein the audio file comprises one or more segments of synthesized speech that describe a position of the graphical element; and audibly playing the audio file responsive to repositioning the graphical element.
  • 8. The computer-implemented method of claim 1, wherein the physical object comprises: a partially conductive base configured to alter capacitance of a portion of the display screen; and an upper portion disposed on the partially conductive base, wherein the upper portion is a physical representation of the graphical element, and wherein the physical representation includes one or more of: texture that represents the graphical element, features identifiable by touch that represent the graphical element, or braille embossing that describes at least a portion of the graphical element.
  • 9. The computer-implemented method of claim 1, further comprising periodically audibly playing synthesized speech that describes a state of the virtual assessment.
  • 10. The computer-implemented method of claim 1, further comprising periodically applying haptic feedback responsive to movement of the physical object on the display screen.
  • 11. A system of accessible computer-user interactions, the system comprising: a display device configured to display a graphical user interface, wherein the graphical user interface comprises a virtual assessment, wherein the virtual assessment is representative of an assessment examination, and wherein the graphical user interface further comprises a graphical element that represents a portion of a logical problem of the assessment examination; a memory with instructions stored thereon; and a processing device, coupled to the memory and to the display device, wherein the processing device is configured to access the memory and execute the instructions, and wherein the instructions, in response to execution by the processing device, cause the processing device to perform or control performance of operations comprising: displaying the graphical user interface on the display device; receiving a signal indicative of placement of a physical object onto the display device above the graphical element; repositioning the graphical element responsive to physical movement of the physical object on the display device; and generating a portion of an assessment score based at least in part on one or more of: physical movement of the physical object on the display device, the signal, or the repositioning of the graphical element.
  • 12. The system of claim 11, further comprising an additive or subtractive printing apparatus configured to physically print one or more copies of the physical object.
  • 13. The system of claim 12, wherein the operations further comprise: generating a data file representative of a printing sequence of a copy of the physical object; and transmitting the data file to the additive or subtractive printing apparatus.
  • 14. The system of claim 11, wherein the physical object comprises: a partially conductive base configured to alter capacitance of a portion of the display device; and an upper portion disposed on the partially conductive base, wherein the upper portion is a physical representation of the graphical element, and wherein the physical representation includes one or more of: texture that represents the graphical element, features identifiable by touch that represent the graphical element, or braille embossing that describes at least a portion of the graphical element.
  • 15. The system of claim 14, wherein the partially conductive base is a 3D printed base having a conductive layer applied thereon.
  • 16. The system of claim 11, wherein the graphical user interface further comprises one or more instructional elements comprising indicia that represent instructions to complete the virtual assessment.
  • 17. The system of claim 16, wherein the operations further comprise: generating an audio file using a speech synthesizer, wherein the audio file comprises one or more segments of synthesized speech of the instructions to complete the virtual assessment; and audibly playing the audio file.
  • 18. The system of claim 11, wherein the operations further comprise: generating an audio file using a speech synthesizer, wherein the audio file comprises one or more segments of synthesized speech that describe the logical problem; and audibly playing the audio file responsive to receiving the signal.
  • 19. The system of claim 11, wherein the operations further comprise: generating an audio file using a speech synthesizer, wherein the audio file comprises one or more segments of synthesized speech that describe a position of the graphical element; and audibly playing the audio file responsive to repositioning the graphical element.
  • 20. The system of claim 11, wherein the operations further comprise: periodically audibly playing synthesized speech that describes a state of the virtual assessment; and periodically applying haptic feedback responsive to movement of the physical object on the display device.
CROSS-REFERENCE TO RELATED PATENT APPLICATIONS

The present application claims the benefit of priority to U.S. Provisional Patent Application Ser. No. 63/013,348, entitled “Systems and Methods for Accessible Game-User Interactions,” filed on Apr. 21, 2020, and U.S. Provisional Patent Application Ser. No. 63/013,314, entitled “Systems and Methods for Accessible Game-Based Scenarios,” filed on Apr. 21, 2020, the entire contents of each of which are hereby incorporated by reference herein.

PCT Information
Filing Document: PCT/US21/28401
Filing Date: 4/21/2021
Country: WO
Provisional Applications (2)
Number Date Country
63013348 Apr 2020 US
63013314 Apr 2020 US