This disclosure generally relates to augmented reality and virtual reality systems. More specifically, this disclosure relates to an apparatus and method for assessing and tracking user competency in augmented/virtual reality-based training in industrial automation systems and other systems.
Augmented reality and virtual reality technologies are advancing rapidly and becoming increasingly common in various industries. Augmented reality generally refers to technology in which computer-generated content is superimposed over a real-world environment. Examples of augmented reality include games that superimpose objects or characters over real-world images and navigation tools that superimpose information over real-world images. Virtual reality generally refers to technology that creates an artificial simulation or recreation of an environment, which may or may not be a real-world environment. An example of virtual reality includes games that create fantasy or alien environments that can be explored by users.
This disclosure provides an apparatus and method for assessing and tracking user competency in augmented/virtual reality-based training in industrial automation systems and other systems.
In a first embodiment, a method includes receiving one or more records containing commands, an association of the commands with visual objects in an augmented reality/virtual reality (AR/VR) space, and an AR/VR environment setup. The commands correspond to user actions taken in the AR/VR space. The method also includes analyzing the user actions based on the one or more records and assessing the user actions based on the analysis.
In a second embodiment, an apparatus includes at least one processing device configured to receive one or more records containing commands, an association of the commands with visual objects in an AR/VR space, and an AR/VR environment setup. The commands correspond to user actions taken in the AR/VR space. The at least one processing device is also configured to analyze the user actions based on the one or more records and assess the user actions based on the analysis.
In a third embodiment, a method includes receiving data defining user actions associated with an AR/VR space. The method also includes translating the user actions into associated commands and identifying associations of the commands with visual objects in the AR/VR space. The method further includes aggregating the commands, the associations of the commands with the visual objects, and an AR/VR environment setup into at least one record. In addition, the method includes transmitting the at least one record for assessment of the user actions.
In a fourth embodiment, an apparatus includes at least one processing device configured to perform the method of the third embodiment or any of its dependent claims. In a fifth embodiment, a non-transitory computer readable medium contains instructions that when executed cause at least one processing device to perform the method of the first embodiment or any of its dependent claims. In a sixth embodiment, a non-transitory computer readable medium contains instructions that when executed cause at least one processing device to perform the method of the third embodiment or any of its dependent claims.
Other technical features may be readily apparent to one skilled in the art from the following figures, descriptions, and claims.
For a more complete understanding of this disclosure, reference is now made to the following description, taken in conjunction with the accompanying drawings, in which:
In conventional training and skill development environments, trainee competency is often assessed using questionnaires, pictorial/image/Flash-based evaluations, or multiple objective questions. These standard assessment techniques often no longer suffice because they do not validate a trainee's skills or performance interactively “on the job.” Also, an assessment of a user's abilities is often based only on end results, which makes it difficult to suggest improvement opportunities quickly.
As augmented/virtual reality solutions for skill development and training continue to grow, it is typically difficult, in the absence of any external monitoring mechanism, to monitor and assess the progress of a trainee and the impact of training in the augmented/virtual space. Ideally, a system could validate a user's skills by tracking each user action and thereby assess the user's competency and real-world problem-solving skills.
This disclosure provides techniques for tracking and assessing an industrial automation user or other user's actions in an augmented/virtual environment, which overcomes challenges with respect to tracking unwanted steps, tracking impacts on underlying industrial systems or other systems, assessing intermediate steps, performing behavioral assessments, and identifying responses to panic situations or other situations. Among other things, this disclosure describes a portable file format that captures content such as user inputs, data formats, and training setups. The portable file format allows for easier storage, computation, and distribution of content and addresses technical constraints with respect to space, computation, and bandwidth.
The architecture 100 also includes at least one processor, such as in a server 110, that is used to record content. The server 110 generally denotes a computing device that receives content from the training environment 102 and records and processes the content. The server 110 includes various functions or modules to support the recording and processing of training or other interactive content. Each of these functions or modules could be implemented in any suitable manner, such as with software/firmware instructions executed by one or more processors. The server 110 could be positioned locally with or remote from the training environment 102.
Functionally, the server 110 includes a user input receiver 112, which receives, processes, and filters user inputs made by the user. The user inputs could include any suitable inputs, such as gestures made by the user, voice commands or voice annotations spoken by the user, textual messages provided by the user, or pointing actions taken by the user using a pointing device (such as a smart glove). Any other or additional user inputs could also be received. The user inputs can be filtered in any suitable manner and are output to an input translator 114. To support the use of the architecture 100 by a wide range of users, input variants (like voice/text in different languages) could be supported. The user input receiver 112 includes any suitable logic for receiving and processing user inputs.
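As a purely illustrative sketch (not part of the disclosed embodiments), filtering of raw user inputs before translation could resemble the following, where the event fields and confidence threshold are assumptions made for illustration only:

```python
# Hypothetical sketch of filtering raw user inputs before translation.
# Event fields ("confidence", "value") and the threshold are illustrative only.
def filter_inputs(raw_inputs, min_confidence=0.6):
    filtered = []
    for event in raw_inputs:
        # Drop low-confidence gesture/voice recognitions.
        if event.get("confidence", 1.0) < min_confidence:
            continue
        cleaned = dict(event)
        # Normalize spoken or typed input so later mapping is consistent.
        cleaned["value"] = str(event.get("value", "")).strip().lower()
        filtered.append(cleaned)
    return filtered

print(filter_inputs([{"type": "voice", "value": " Open Valve ", "confidence": 0.9},
                     {"type": "gesture", "value": "tap", "confidence": 0.3}]))
# -> only the voice event survives, with its value normalized to "open valve"
```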
The input translator 114 translates the various user inputs into specific commands by referring to a standard action grammar reference 116. The grammar reference 116 represents an actions-to-commands mapping dictionary that associates different user input actions with different commands. For example, the grammar reference 116 could associate certain spoken words, text messages, or physical actions with specific commands. The grammar reference 116 could support one or multiple possibilities for commands where applicable, such as when different commands may be associated with the same spoken words or text messages but different physical actions. The grammar reference 116 includes any suitable mapping or other association of actions and commands. The input translator 114 includes any suitable logic for identifying commands associated with received user inputs.
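For illustration only, an actions-to-commands mapping in the spirit of the grammar reference 116 could be represented as a simple lookup table; the entries, command names, and helper function below are hypothetical and not part of the disclosed embodiments:

```python
# Hypothetical actions-to-commands mapping in the spirit of the grammar
# reference 116. All entries and command names are illustrative only.
ACTION_GRAMMAR = {
    ("voice", "open valve"): "OPEN_VALVE",
    ("text", "open valve"): "OPEN_VALVE",
    ("voice", "acknowledge alarm"): "ACK_ALARM",
    ("gesture", "twist_clockwise"): "OPEN_VALVE",
    ("gesture", "tap"): "SELECT_OBJECT",
}

def translate(input_type, normalized_input):
    """Map one filtered user input to a command, or None if no mapping exists."""
    return ACTION_GRAMMAR.get((input_type, normalized_input))

print(translate("gesture", "twist_clockwise"))  # -> OPEN_VALVE
```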
The input translator 114 outputs identified commands to an aggregator 118. The aggregator 118 associates the commands with visual objects in the AR/VR space being presented to the user and aggregates the commands and their associations into one or more records 120. The aggregator 118 also embeds an AR/VR environment setup into the one or more records 120. The AR/VR environment setup can define which visual objects are to be presented in the AR/VR space. The records 120 therefore associate specific commands (which were generated based on user inputs) with specific visual objects in the AR/VR space as defined by the environment setup. The aggregator 118 includes any suitable logic for aggregating data.
The records 120 are created in a portable file format, which allows the records 120 to be used by various other devices. For example, the data in the records 120 can be processed to assess the user's skills and identify whether additional training might be needed. This can be accomplished without requiring the transport of larger data files like video files. The portable file format could be defined in any suitable manner, such as by using XML or JSON.
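As one purely hypothetical example of such a portable format, a record 120 could be serialized to JSON along the following lines; the field names are assumptions for illustration and do not reflect any defined schema:

```python
import json

# Hypothetical record structure; the field names are illustrative, not a schema.
record = {
    "environment_setup": {
        "scene": "control_room_A",
        "objects": ["valve_01", "panel_03"],   # visual objects in the AR/VR space
    },
    "actions": [
        # Each entry associates a command (derived from a user input) with the
        # visual object it was applied to, plus a timestamp in seconds.
        {"t": 12.4, "command": "SELECT_OBJECT", "object": "valve_01"},
        {"t": 15.9, "command": "OPEN_VALVE", "object": "valve_01"},
    ],
}

# A compact, text-based serialization keeps the record easy to store and transmit.
print(json.dumps(record, indent=2))
```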
The records 120 could be used in various ways. In this example, the records 120 are provided (such as via a local intranet or a public network like the Internet) to a cloud computing environment 122, which implements various functions to support analysis of the records 120 and assessment of the user. Note, however, that the analysis and assessment functions could be implemented in other ways and need not be performed by a cloud computing environment. For instance, the analysis and assessment functions could be implemented using the server 110.
As shown in
Records 120 from the API 124 or the database 126 can be provided to an action validator 128, which has access to one or more sets of validation rules 130. Different sets of validation rules 130 could be provided, such as for different types of users, different types of equipment, or different types of operational scenarios. The validation rules 130 can therefore be configurable in order to provide the desired functionality based on the user actions being evaluated. The action validator 128 processes one or more records 120 based on the appropriate set of validation rules 130. The action validator 128 can also receive and use feedback from system software 132, which generally denotes software used to control one or more industrial processes (such as EXPERION software from HONEYWELL INTERNATIONAL INC. or safety system software) or other processes. The feedback can be used to verify whether an expected or desired outcome was achieved by the user. Based on this information, the action validator 128 determines a result for each action or group of actions taken by the user and identified in the record(s) 120. Example results could include correct, partially correct, wrong, invalid, or damaging. The action validator 128 includes any suitable logic for evaluating user actions.
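A minimal, hypothetical sketch of how configurable validation rules could be applied to the actions in a record is shown below; the rule structure, sequence matching, and feedback handling are illustrative assumptions rather than a definitive implementation:

```python
# Hypothetical application of configurable validation rules to the actions in a
# record, in the spirit of the action validator 128 and validation rules 130.
validation_rules = [
    {
        "object": "valve_01",
        "expected_sequence": ["SELECT_OBJECT", "OPEN_VALVE"],
        "result_if_matched": "correct",
    },
]

def validate_actions(actions, rules, system_feedback=None):
    """Return a result label for a group of actions identified in a record."""
    commands = [a["command"] for a in actions]
    for rule in rules:
        sequence_ok = commands == rule["expected_sequence"]
        object_ok = all(a["object"] == rule["object"] for a in actions)
        if sequence_ok and object_ok:
            # Optional feedback from system software can confirm the outcome.
            if system_feedback is not None and not system_feedback.get("outcome_ok", True):
                return "partially_correct"
            return rule["result_if_matched"]
    return "wrong"

actions = [
    {"t": 12.4, "command": "SELECT_OBJECT", "object": "valve_01"},
    {"t": 15.9, "command": "OPEN_VALVE", "object": "valve_01"},
]
print(validate_actions(actions, validation_rules))  # -> correct
```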
An assessment engine 134 uses the results from the action validator 128 to generate an assessment for the user. The assessment could take any suitable form, such as a pass/fail score for each action or collection of actions, reward points, or any other measurement for each action or collection of actions. The assessment engine 134 includes any suitable logic for assessing a user's competencies.
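As an illustrative assumption only, per-action results could be turned into a simple score or pass/fail measurement along these lines; the point values and pass mark are arbitrary:

```python
# Hypothetical scoring of per-action results into a simple assessment, loosely
# analogous to the assessment engine 134. Point values and pass mark are arbitrary.
POINTS = {"correct": 10, "partially_correct": 5, "wrong": 0, "invalid": 0, "damaging": -10}

def assess(results, pass_mark=0.7):
    score = sum(POINTS.get(r, 0) for r in results)
    maximum = 10 * len(results)                      # best case: every action correct
    passed = maximum > 0 and score / maximum >= pass_mark
    return {"score": score, "passed": passed}

print(assess(["correct", "correct", "partially_correct"]))  # -> score 25, passed True
```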
The measurements from the assessment engine 134 can be provided to a learning management system (LMS) 136. The user can be enrolled in the LMS 136 for competency development, and the LMS 136 can use the measurements to identify areas where the user is competent and areas where the user may require further training. An analytics engine 138 could use the measurements from the assessment engine 134, along with past historical performance of the user over a period of time, to gain insights into the user's competencies. The analytics engine 138 could then recommend training courses to help improve the user's skills. The LMS 136 includes any suitable logic for interacting with and providing information to users for training or other purposes. The analytics engine 138 includes any suitable logic for analyzing user information and identifying training information or other information to be provided to the user.
Based on this, the following process could be performed using the various components of the server 110 in
Moreover, based on this, the following process could be performed using the various components of the cloud computing environment 122 in
In this way, the architecture 100 can be used to capture and store users' actions in AR/VR environments. As a result, data associated with the AR/VR environments can be easily captured, stored, and distributed in the records 120. Other devices and systems can use the records 120 to analyze the users' actions and possibly recommend training for the users. The records 120 can occupy significantly less space in memory and require significantly less bandwidth for transmission, reception, storage, and analysis compared to alternatives such as video/image recording. These features can provide significant technical advantages, such as in systems that collect and analyze large amounts of interactive data related to a number of AR/VR environments.
This technology can find use in a number of ways in industrial automation settings or other settings. For example, control and safety systems and related instrumentations used in industrial plants (such as refinery, petrochemical, and pharmaceutical plants) are often very complex in nature. It may take a lengthy period of time (such as more than five years) to train new system maintenance personnel to become proficient in managing plant and system upsets independently. When such long training times are combined with the growing number of experienced personnel retiring in the coming years, industries face acute skill shortages and increased plant upsets due to a lack of experience and skill.
Traditional classroom training, whether face-to-face or online, often requires personnel to be away from the field for an extended time (such as 20 to 40 hours). In many cases, this is not practical, particularly for plants that are already facing resource and funding challenges due to overtime, travel, or other issues. Also, few sites have powered-on and functioning control hardware for training. Due to the fast rate of change for technology, it may no longer be cost-effective to procure and maintain live training systems.
Simulating control and safety system hardware in the AR/VR space, building dynamics of real hardware modules in virtual objects, and interfacing the AR/VR space with real supervisory systems (such as engineering and operator stations) can provide various benefits. For example, it can reduce or eliminate any dependency on real hardware for competency management. It can also “gamify” the learning of complex and mundane control and safety system concepts, which can help to keep trainees engaged. It can further decrease the time needed to become proficient in control and safety system maintenance through more hands-on practice sessions and higher retention of the training being imparted.
This represents example ways in which the devices and techniques described above could be used. However, these examples are non-limiting, and the devices and techniques described above could be used in any other suitable manner. In general, the devices and techniques described in this patent document could be applicable whenever one or more user actions in an AR/VR space are to be recorded, stored, and analyzed (for whatever purpose).
Although
As shown in
The memory 210 and a persistent storage 212 are examples of storage devices 204, which represent any structure(s) capable of storing and facilitating retrieval of information (such as data, program code, and/or other suitable information on a temporary or permanent basis). The memory 210 may represent a random access memory or any other suitable volatile or non-volatile storage device(s). The persistent storage 212 may contain one or more components or devices supporting longer-term storage of data, such as a read only memory, hard drive, Flash memory, or optical disc.
The communications unit 206 supports communications with other systems or devices. For example, the communications unit 206 could include a network interface card or a wireless transceiver facilitating communications over a wired or wireless network (such as a local intranet or a public network like the Internet). The communications unit 206 may support communications through any suitable physical or wireless communication link(s).
The I/O unit 208 allows for input and output of data. For example, the I/O unit 208 may provide a connection for user input through a keyboard, mouse, keypad, touchscreen, or other suitable input device. The I/O unit 208 may also send output to a display, printer, or other suitable output device.
Although
As shown in
Information defining an AR/VR environment setup is received at step 304. This could include, for example, the processing device 302 of the server 110 receiving information identifying the overall visual environment of the AR/VR space being presented to the user by the user device 104-108 and information identifying visual objects in the AR/VR space being presented to the user by the user device 104-108.
Information defining user actions associated with the AR/VR environment is received at step 306. This could include, for example, the processing device 302 of the server 110 receiving information identifying how the user is interacting with one or more of the visual objects presented in the AR/VR space by the user device 104-108. The interactions could take on various forms, such as the user making physical gestures, speaking voice commands, speaking voice annotations, or providing textual messages. This information is used to detect, track, and filter the user actions at step 308. This could include, for example, the processing device 302 of the server 110 processing the received information to identify distinct gestures, voice commands, voice annotations, or textual messages that occur. This could also include the processing device 302 of the server 110 processing the received information to identify visual objects presented in the AR/VR space that are associated with those user actions.
The user actions are translated into commands at step 310. This could include, for example, the processing device 302 of the server 110 using the standard action grammar reference 116 and its actions-to-commands mapping dictionary to associate different user actions with different commands. Specific commands are associated with specific visual objects presented in the AR/VR space at step 312. This could include, for example, the processing device 302 of the server 110 associating specific ones of the identified commands with specific ones of the visual objects presented in the AR/VR space. This allows the server 110 to identify which visual objects are associated with the identified commands.
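One hypothetical way to associate each translated command with the visual object the user was interacting with (step 312) is sketched below; the timeline-based matching and all names are assumptions for illustration only:

```python
# Hypothetical association of each translated command with the visual object the
# user was focused on at that moment (step 312). Names and matching are illustrative.
def associate(commands, focus_timeline):
    """commands: list of (time, command); focus_timeline: time-sorted (time, object_id)."""
    associated = []
    for t, cmd in commands:
        focused = None
        for ft, obj in focus_timeline:
            if ft <= t:
                focused = obj          # most recent object focused at or before time t
            else:
                break
        associated.append({"t": t, "command": cmd, "object": focused})
    return associated

print(associate([(15.9, "OPEN_VALVE")], [(10.0, "panel_03"), (12.4, "valve_01")]))
# -> [{'t': 15.9, 'command': 'OPEN_VALVE', 'object': 'valve_01'}]
```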
At least one file is generated that contains the commands, the associations of the commands with the visual objects, and the AR/VR environment setup at step 314. This could include, for example, the processing device 302 of the server 110 generating a record 120 containing this information. The at least one file is output, stored, or used in some manner at step 316. This could include, for example, the processing device 302 of the server 110 providing the record 120 to the API 124 for storage in the database 126 or analysis by the action validator 128.
As shown in
Applicable validation rules are obtained at step 404. This could include, for example, the processing device 302 implementing the action validator 128 obtaining one or more sets of validation rules 130. The validation rules 130 could be selected in any suitable manner. Example selection criteria could include the type of activity being performed by the user in the AR/VR space, the type of user being evaluated, the type of equipment being simulated in the AR/VR space, or the type of operational scenario being simulated in the AR/VR space.
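For illustration, selecting a rule set based on criteria such as those listed above might look like the following; the keys and rule-set names are hypothetical:

```python
# Hypothetical selection of a validation rule set based on criteria such as the
# activity and user type; the keys and rule-set names are illustrative only.
RULE_SETS = {
    ("maintenance", "field_technician"): "maintenance_rules_v1",
    ("operations", "panel_operator"): "operations_rules_v1",
}

def select_rule_set(activity_type, user_type, default="generic_rules"):
    return RULE_SETS.get((activity_type, user_type), default)

print(select_rule_set("maintenance", "field_technician"))  # -> maintenance_rules_v1
```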
One or more actions or groups of actions identified by the received file are analyzed using the selected validation rules at step 406, and results assessing the user's actions are determined at step 408. This could include, for example, the processing device 302 implementing the action validator 128 using the validation rules to determine whether the user performed correct or incorrect actions within the user's AR/VR space. This could also include the processing device 302 implementing the action validator 128 determining whether the desired outcome or result was obtained by the user as a result of the user actions within the user's AR/VR space. In some cases, the action validator 128 can use feedback, such as from one or more devices used for industrial process control, to determine whether the user's actions would have resulted in the desired outcome or result.
The user can be informed of the results at step 410. This could include, for example, the action validator 128 providing the results to the LMS 136 for delivery to the user. The results can also be analyzed to determine whether the user might require or benefit from additional training at step 412, and the user can be informed of any additional training opportunities at step 414. This could include, for example, the processing device 302 implementing the analytics engine 138 analyzing the user's current results and possibly the user's prior results in order to recommend relevant training courses that might benefit the user. This could also include the analytics engine 138 providing the results to the LMS 136 for delivery to the user.
Although
In some embodiments, various functions described in this patent document are implemented or supported by a computer program that is formed from computer readable program code and that is embodied in a computer readable medium. The phrase “computer readable program code” includes any type of computer code, including source code, object code, and executable code. The phrase “computer readable medium” includes any type of medium capable of being accessed by a computer, such as read only memory (ROM), random access memory (RAM), a hard disk drive, a compact disc (CD), a digital video disc (DVD), or any other type of memory. A “non-transitory” computer readable medium excludes wired, wireless, optical, or other communication links that transport transitory electrical or other signals. A non-transitory computer readable medium includes media where data can be permanently stored and media where data can be stored and later overwritten, such as a rewritable optical disc or an erasable storage device.
It may be advantageous to set forth definitions of certain words and phrases used throughout this patent document. The terms “application” and “program” refer to one or more computer programs, software components, sets of instructions, procedures, functions, objects, classes, instances, related data, or a portion thereof adapted for implementation in a suitable computer code (including source code, object code, or executable code). The term “communicate,” as well as derivatives thereof, encompasses both direct and indirect communication. The terms “include” and “comprise,” as well as derivatives thereof, mean inclusion without limitation. The term “or” is inclusive, meaning and/or. The phrase “associated with,” as well as derivatives thereof, may mean to include, be included within, interconnect with, contain, be contained within, connect to or with, couple to or with, be communicable with, cooperate with, interleave, juxtapose, be proximate to, be bound to or with, have, have a property of, have a relationship to or with, or the like. The phrases “at least one of” and “one or more of,” when used with a list of items, mean that different combinations of one or more of the listed items may be used, and only one item in the list may be needed. For example, “at least one of: A, B, and C” includes any of the following combinations: A, B, C, A and B, A and C, B and C, and A and B and C.
The description in the present application should not be read as implying that any particular element, step, or function is an essential or critical element that must be included in the claim scope. The scope of patented subject matter is defined only by the allowed claims. Moreover, none of the claims is intended to invoke 35 U.S.C. § 112(f) with respect to any of the appended claims or claim elements unless the exact words “means for” or “step for” are explicitly used in the particular claim, followed by a participle phrase identifying a function. Use of terms such as (but not limited to) “mechanism,” “module,” “device,” “unit,” “component,” “element,” “member,” “apparatus,” “machine,” “system,” “processor,” or “controller” within a claim is understood and intended to refer to structures known to those skilled in the relevant art, as further modified or enhanced by the features of the claims themselves, and is not intended to invoke 35 U.S.C. § 112(f).
While this disclosure has described certain embodiments and generally associated methods, alterations and permutations of these embodiments and methods will be apparent to those skilled in the art. Accordingly, the above description of example embodiments does not define or constrain this disclosure. Other changes, substitutions, and alterations are also possible without departing from the spirit and scope of this disclosure, as defined by the following claims.
This application claims priority under 35 U.S.C. § 119(e) to U.S. Provisional Patent Application No. 62/517,006, U.S. Provisional Patent Application No. 62/517,015, and U.S. Provisional Patent Application No. 62/517,037, all filed on Jun. 8, 2017. These provisional applications are hereby incorporated by reference in their entirety.