This technology relates to electronic performance evaluation systems. More specifically, the technology relates to formative feedback acquisition and analytics systems for performance assessments.
For many years, teaching methods have remained the same: an instructor imparts information to students through lecture or discussion and then tests the students on their understanding of that information. Studies show that these teaching methods tend to be passive and linear and do not assure student knowledge or comprehension. Effective learning requires integration of different methodologies and assessment at multiple levels, including discussions, modeling, and practical exercises.
Feedback is an essential component in learning contexts and serves a variety of purposes including evaluation of student achievement, development of student competencies, and understanding and promotion of student motivation and confidence. Within teaching and learning activities, students perceive feedback as information communicated to the learner as a result of a learning-oriented action. Feedback strategies include both the content of feedback itself and the method used to communicate the feedback to students. Communication of feedback is important because the method selected may either discourage students from engaging or draw their attention into the feedback process. In order to be effective, the manner in which feedback is communicated to the student must ensure student engagement with the content.
Formative assessment is specifically intended to generate feedback on performance to improve and accelerate learning. Knowing how students think in the process of learning makes it possible for instructors to help their students overcome conceptual difficulties and, in turn, improve their learning. Good feedback practice can help students clarify what good performance means, facilitate the development of reflection in learning, and deliver high quality information to students about their learning and competency. Feedback based on formative assessment is closely connected to instruction and provides information about how to improve performance. Feedback given as part of formative assessment helps learners to achieve their goals. Further, students can be instructed and trained in how to interpret feedback, how to make connections between the feedback and the characteristics of the work they produce, and how they can improve their work in the future.
In a clinical healthcare environment, patient safety and quality of care outcomes have garnered widespread attention across all facets and disciplines. Dental educators bear a significant societal responsibility in determining how and when a dental student has achieved professional clinical competency, which includes the complex ability to perform independent, unsupervised dental practice.
The American Dental Education Association (ADEA) defines competency by the following behaviors: (a) synthesis of knowledge; (b) experience; (c) critical thinking and problem solving skills; (d) professionalism; (e) ethical values; and (f) technical and procedural skills. As a result of ADEA's advisory and educational policy role in dental education, there is a push for competency-based education (CBE) of dental students, which poses a challenge regarding the best practices approach for specific and accurate assessment methods.
Non-graded formative feedback is critical to establishing competence in any dental education program that strives for true CBE: most recorded daily grades in dental education clinical programs are a point of contention as they have a tendency to be either very subjective or centered down the middle of the grading scale, which is most likely inaccurate and non-specific. The advantage of a longitudinal formative feedback evaluation system is that it can deliver a “big picture appraisal of a student's overall competence” rather than competence at snapshots in time.
Today's educational classrooms rely upon technology to expand the boundaries of the classroom so that students can learn anytime, anywhere. The Internet provides an inexpensive and fast service for the delivery of content, peer collaboration, and accessibility to new teaching methods. To use technology effectively for learning, the learning process must be dynamic, active, and interactive. Instructors should identify desired results, determine acceptable evidence of performance, and plan learning experiences and instruction. Courses and courses of study can be developed based upon desired results, goals, or standards and then the course can be built from evidence of learning called for by established educational standards.
Past efforts to provide an electronic assessment and reporting system that provides usable formative feedback have fallen short. Previous systems focused exclusively on the educational content of the learning exercises or the manner of providing feedback without successfully integrating the two. These previous systems and methods were primarily interested in recording summative assessments (e.g., a learner received an “A” grade, got 75% on a test score, or scored a 3 on a task) which captured snapshots of competence and provided a learner little guidance to improve. Any formative feedback recorded usually came in the form of free text input by a teacher. Consequently, these systems had difficulty in acquiring and analyzing meaningful feedback over time. They were inadequate in recording formative feedback, compiling the results into actionable observations, and analyzing and distributing the results.
Analysis of a learner's accumulated observations is difficult, time intensive, and prone to clerical error because the formative feedback is not standardized. More importantly, recording free text can be arduous (requiring a great deal of time) and/or not uniform (e.g., lexicon between teachers is different), decreasing the overall likelihood of the feedback ever getting recorded and used. Without specific areas to improve and a method to track identified areas, a learner cannot effectively advance toward competency.
Performance competence cannot be fully measured using stand-alone, snapshot, summative assessments like multiple choice exams and one-time examinations. For example, in the healthcare environment, practitioner competence can be more effectively measured through a longitudinal means, with many evaluations from multiple sources focusing on qualitative metrics (e.g., constructive criticism to improve weakness and praise to note strengths) as opposed to quantitative metrics (e.g., receiving a C− or a 100%). Formative feedback—defined as information communicated to the learner that is intended to modify thinking or behavior for the purpose of advancing the learner toward competency—is especially important to tracking a practitioner's competency. Even though educators acknowledge the importance of this information, this information is difficult to acquire and even harder to make sense of. Performing formative feedback sessions, compiling the results, and analyzing the results is time-consuming and resource intense.
The claimed invention addresses shortcomings in prior systems by standardizing formative feedback into keywords, streamlining the feedback recording process to seconds, and delivering real-time, analyzed results to teachers and learners. The claimed invention provides systems and methods that go beyond previous efforts by providing feedback on a formative assessment that is timely, constructive, motivational, personal, manageable, and directly related to assessment criteria and learning outcomes. The invention acquires, compiles, analyzes, and reports formative feedback evaluations. One example implementation of the invention includes an iOS formative feedback application that provides capabilities beyond previous systems by interpreting and framing pertinent comments into keywords, thereby cutting the time it takes evaluators to input this data to seconds. The invention applies advanced analytics to the collected evaluation data and displays the results in an intuitive, real-time, graphical dashboard to administrators. The invention provides a comprehensive electronic formative feedback system that addresses the assessment loop, allowing administrators to efficiently track, assess, and, if necessary, intervene in matters related to competency.
The invention is true to the principles of competency tracking through time, and the systems and methods of the invention can be customized to different clinical, business, educational, manufacturing, service, and other environments. Performance improvement plans, peer-to-peer evaluations, SWOT analyses—these items and more benefit from the support of formative feedback integrated into their processes and managed with the systems and methods of the invention.
The invention eliminates this resource-intensive endeavor by providing a learner with just-in-time feedback and appropriate intervention despite today's budgetary constraints, diminished resources, and limited faculty and supervisor numbers. The invention provides an efficient and effective system of recording all respective data points that translate into the “big picture” for each learner/student. The systems and methods provide more than just a snapshot evaluation and instead create individual longitudinal track records for both technical and formative metrics.
In one example implementation, the invention provides a longitudinal, FERPA (Family Educational Rights and Privacy Act) compliant, mobile-based health professional formative feedback system. Input from end-users is kept to a minimum (e.g., five button presses or fewer), and the feedback provided is robust. The interface is agile and accommodates record keeping of teaching moments in all dental medicine learning environments—preclinical, clinical, and CBDE (Community Based Dental Education). The system provides real-time tracking of a student's performance through the curriculum, allowing faculty to observe student trends and assess the results of interventions. The invention enables user friendly, meaningful, on demand tracking of an individual's progression to attainment of competency without increasing administrative overhead.
The invention advances the state of electronic learning environments and assessment systems by converting and framing pertinent comments into keywords which can have positive or negative connotations. The invention uses mobile technology and workflow optimization to reduce feedback acquisition time and provides on-demand analytics to acquired feedback and real-time display of the results on mobile devices.
One example implementation of the formative feedback and evaluation system of the invention includes a formative feedback server and a formative feedback database. The formative feedback server receives a user file from an administrator computer. The user file includes an evaluator account, an administrator user level, and an evaluator user level. The user account and/or user level can be received via an optical label, such as a QR code.
The formative feedback server receives a keyword file and/or a category file and/or a performance ratings file from the administrator computer. The formative feedback server also receives a survey framework for a formative feedback evaluation from the administrator computer. The survey framework includes formatted questions for an evaluator.
The formative feedback database stores any of the user file, keyword file, category file, and performance ratings file. The formative feedback server appends the survey framework to include user bibliographic information, keywords, categories, and performance ratings from the respective user file, keyword file, category file, and performance ratings file and delivers the appended survey framework to an evaluator computer. The keyword file can include standardized keywords and/or key phrases. Additionally, the keyword file can be created from a neutral-connotation keyword spreadsheet generated by an evaluating organization that describes assessment aspects of a performance task.
The survey framework can include formatted questions based upon the keywords, organized by the evaluation categories, and provides a plurality of performance rating indicators. The survey framework can be stored in the formative feedback database as a survey application, for example a web-based survey application that runs inside a browser on an evaluator computer. Account credentials for evaluators and evaluatees are embedded into the survey framework. The evaluator's (client) computer can scan an optical label to populate the survey framework.
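For illustration only, the appended survey framework can be pictured as a structured document assembled from the stored data files. The following is a minimal Python sketch; the file contents, field names, and rating labels are hypothetical and not part of the claimed implementation.

```python
import json

# Hypothetical contents of the administrator-supplied data files.
categories = ["Preparation", "Process", "Procedure", "Professionalism"]
keywords = {"Preparation": ["Armamentarium", "Infection Control", "Knowledgeable"],
            "Process": ["Use of Resources", "Time Management"]}
ratings = ["Needs Improvement", "Competent", "Exemplary"]

def build_survey_framework(evaluator_id, evaluatee_id):
    """Assemble a survey framework: one formatted question per keyword,
    grouped by category, each offering the configured rating indicators."""
    questions = []
    for category in categories:
        for keyword in keywords.get(category, []):
            questions.append({
                "category": category,
                "keyword": keyword,
                "prompt": f"Rate the evaluatee's '{keyword}' ({category}).",
                "rating_options": ratings,
            })
    return {
        "evaluator": evaluator_id,   # embedded account credentials
        "evaluatee": evaluatee_id,
        "questions": questions,
    }

print(json.dumps(build_survey_framework("EVAL-001", "STU-042"), indent=2))
```

In practice, the web-based survey application would render each entry as a formatted question with the configured performance rating indicators.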
The formative feedback server can receive a scan of an optical label from an evaluator computer and respond by further embedding bibliographic information of an evaluatee and/or procedural information of a task to be demonstrated by the evaluatee into the survey framework and sending the updated survey framework to the evaluator computer.
The evaluator computer sends a completed survey framework to the formative feedback server and to a dashboard computer where it is stored and used for analytics. For example, the formative feedback dashboard computer receives entered feedback from an evaluator computer and stores the entered feedback as an evaluation file, and the formative feedback server simultaneously receives the entered feedback and stores the entered feedback as an evaluation file in the formative feedback database.
The formative feedback and evaluation system also provides many analytics capabilities. For example, the survey framework can be a mobile computer application framework that securely displays an un-indexed URL. The un-indexed URL can transmit and receive embedded text fields within the URL to ensure integrity of evaluations while allowing cross-platform access and data communication from servers. The system can include a formative feedback dashboard computer that receives and consolidates evaluation data received from the mobile computer application framework. The formative feedback dashboard computer can apply scripted processes to the received data to provide data update intervals, user access levels, data calculations, data filtering, and dynamic graphical displays.
The patent or application file contains at least one drawing executed in color. Copies of this patent or patent application publication with color drawing(s) will be provided by the Office upon request and payment of the necessary fee.
The invention provides a framework for providing feedback regarding a formative assessment. The invention creates a background structure that enables timely, constructive, motivational, and personal reactions directly related to assessment criteria and learning outcomes. The invention acquires and analyzes evaluation phrases and compiles keywords, clinical categories, and ratings, including ranges, positive and negative reviews, trends over time, free text comments, and other evaluation metrics. The invention receives evaluator notations indicative of the proficiency of a student/evaluatee/learner performing a task. The invention creates feedback reports from the formative feedback evaluations and provides a host of analytics to help both the evaluator and the student understand and assess the student's proficiency and competence for the tasks/skills they perform.
System Architecture and Process Overview
As shown in the figures, the system 100 includes administrator computer 110, iFF server 120, client side computer 130, and iFF dashboard display device 140. The system components communicate through network 199, such as the Internet or other computer communication networks, for example.
As shown in the figures, the administrator computer 110 receives input from different evaluators and uses that input to establish evaluation areas (e.g., practice areas, names of procedures, timing of procedures, and other considerations related to establishing the core and ancillary competencies of the evaluatees/students/learners).
iFF Server 120 provides functionality for other programs and devices, including client side mobile computer 130. iFF server 120 provides services to client side computer 130 and to administrator computer 110 and iFF dashboard display computer device 140. iFF server 120 shares data and resources among multiple clients and performs computations for the clients. iFF server 120 includes iFF database 125. For example, in one implementation of the invention, the iFF server 120 is a SQL database server.
The system 100 can also be implemented on a computer system or systems that extend across any network environment using any suitable interface mechanisms and communications technologies including, for example, telecommunications in any suitable form (e.g., voice, modem, and the like), Public Switched Telephone Networks (PSTNs), Packet Data Networks (PDNs), the Internet, intranets, and combinations of the above.
Additionally, in block 1120, the administrator computer 110 creates standardized keywords, evaluation categories, and/or ratings (e.g., numerical ranges, indicated levels of proficiency, positive/negative, pass/fail, and other types of performance ratings) of interest to the organization. The manner in which the administrator computer 110 creates standardized keywords is detailed below in the Formative Feedback Keywords section.
Once the administrator computer 110 creates the standardized keywords, evaluation categories, and ratings, the administrator computer 110 transfers the keywords, categories, and ratings to the iFF server 120. The iFF server 120 stores the keywords as a keyword data file in the keyword database. The administrator computer 110 inputs these keywords directly into the web-based survey application, from which they are exported to the iFF database 125 and dashboard computer 140 via a CSV (comma separated values) file, as one example. In one example implementation of the invention, the keywords are exported from the web-based survey application through an API, through a manual export, or as a text entry process facilitated by an administrator. Similarly, the iFF server 120 stores the created categories as a category data file in the categories database, and the ratings as a ratings file in a ratings database. The respective keyword database, categories database, and ratings database can be partitioned from a single storage medium or can be located alongside each other in one physical computer system or can be geographically separated in different computers, different buildings, different cities, and different countries. For simplicity, in the example system 100, these databases are part of iFF database 125 on iFF server 120.
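A minimal sketch of this storage step is shown below, assuming a SQL-backed iFF database 125 and a two-column (category, keyword) CSV export; the file name, table layout, and column names are hypothetical.

```python
import csv
import sqlite3

# Stand-in for iFF database 125; any SQL database could serve here.
conn = sqlite3.connect("iff_database.db")
conn.execute("CREATE TABLE IF NOT EXISTS keywords (category TEXT, keyword TEXT)")
conn.execute("CREATE TABLE IF NOT EXISTS ratings (label TEXT, value INTEGER)")

# Load the CSV file exported from the web-based survey application.
# Assumed layout: one row per keyword, e.g. "Preparation,Infection Control".
with open("keywords_export.csv", newline="") as f:
    rows = [(row[0], row[1]) for row in csv.reader(f) if row]

conn.executemany("INSERT INTO keywords (category, keyword) VALUES (?, ?)", rows)
conn.commit()
conn.close()
```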
In addition to the ratings files, category files, and keyword files, the Administrator computer 110 generates a survey framework for the evaluation based on the ratings files, category files, and keyword files. For example, in one implementation of the invention, the survey framework includes formatted questions based on the keywords organized by the created categories where an evaluator will select a rating to characterize a student's proficiency at a particular task. The administrator computer 110 sends the survey framework to the iFF server 120, where it is stored in iFF database 125 as a survey application at a URL. The survey application can be a web-based survey application, for example, that embeds additional data from files stored in iFF database 125 or elsewhere as the individual evaluations are compiled. In one example implementation, the web-based survey application is an HTML5 form application (e.g., similar to Google Forms, Survey Monkey, and other forms) which can be customized by an administrator. The web-based survey application is displayed within the iFF mobile application through an embedded web viewer. User credentials are input into the iFF mobile application through scanning a valid QR code, for example. These credentials are checked against the information housed in the iFF server 120 and a subsequent URL is generated with the user credentials embedded within the URL itself. This URL is hidden from the users as a security feature. In addition to one example implementation of the invention using a web-based survey app running inside a browser, the application can also be client-based, where part of the program is downloaded to the client side computer 130, but processing is done over the network 199 on the iFF server 120.
The system 100 creates individual evaluations using the survey application as a framework. The survey application imports a range of questions (e.g., Likert scale, multiple choice, true/false, fill-in-the-blank, and other question types), generates an un-indexed URL, and embeds text into the form. The survey application generates an un-indexed URL for security purposes. Because the invention utilizes embedded text fields within the URL itself to pass information from the iFF server 120 to the survey application, publicizing this URL could compromise the integrity of the assessments being used in a particular deployment and could, potentially, allow any user to enter unregulated data into the iFF system 100. While the URL is un-indexed for maximum security, it also needs to be accessible to any user with the address, ensuring maximum compatibility across the wide range of mobile products on the market today. For example, in one implementation of the invention, the iFF system 100 utilizes a Qualtrics survey platform. Other web-based survey applications that allow users to easily create and manage survey forms with differing question types (e.g., Likert scales, multiple choice, heat-map-based questions, etc.) can also be used. Suitable web-based survey applications can publish un-indexed URLs that support embedded text fields, provide an API that can export data directly to the iFF server 120, and are user-friendly yet robust in their scalability and ability to adapt to different organizations and different methods of evaluation.
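To make the embedded-text-field mechanism concrete, the sketch below builds a survey URL with evaluator, evaluatee, and procedure fields embedded as query parameters. The base URL, parameter names, and the added HMAC signature are illustrative assumptions rather than the actual deployment's scheme.

```python
import hmac
import hashlib
from urllib.parse import urlencode

SECRET_KEY = b"server-side-secret"            # hypothetical shared secret
BASE_URL = "https://surveys.example.org/iff"  # hypothetical un-indexed survey URL

def build_survey_url(evaluator_id, evaluatee_id, procedure_code):
    """Embed credential fields in the URL and sign them so the survey
    application can reject tampered or unregistered requests."""
    fields = {
        "evaluator": evaluator_id,
        "evaluatee": evaluatee_id,
        "procedure": procedure_code,
    }
    payload = "&".join(f"{k}={v}" for k, v in sorted(fields.items()))
    fields["sig"] = hmac.new(SECRET_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return f"{BASE_URL}?{urlencode(fields)}"

print(build_survey_url("EVAL-001", "STU-042", "D2740"))
```

Because the URL itself carries the credentials, keeping it un-indexed and hidden inside the mobile application is what preserves the integrity of the assessment data.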
In block 1125, iFF server 120 embeds account credentials for the evaluators and the evaluatees/students/learners into the survey application, which is stored at a secure URL. Once the system 100 makes the account credentials part of the survey application, the system 100 provides the secure URL to the client-side computer 130 in block 1130.
The system 100 takes advantage of the portability and mobility of the client side computer 130 to move about and change locations depending upon the location of the evaluation. In some example implementations of the invention, client side computer 130 is a mobile device, such as a tablet, smart phone, or other mobile computing device. When client-side computer 130 is a mobile device, the URL is displayed securely in a client side mobile app. The client side mobile app is a computer program that performs a group of coordinated functions, tasks, or activities for the user. The client side mobile app is an application optimized for mobile devices that provides the ability to check evaluator and learner credentials with the iFF server 120, scan QR codes, and display URLs without revealing the physical address to the user.
To begin an evaluation or to otherwise record an encounter where an evaluator observes and documents performance of an evaluatee demonstrating a particular behavior or skill, the evaluator logs in to the client side application and accesses the survey application from iFF server 120 via network 199 as noted in block 1135. The login credentials of the evaluator provide access to one or more survey applications from the iFF server 120.
The evaluator can select an appropriate survey application and then enter evaluatee information into the survey application. In one example implementation of the invention, the evaluator enters the evaluatee information by scanning a QR code of the evaluatee as shown in block 1140. The QR code provides bibliographic information regarding the evaluatee as well as additional information such as the task to be performed, the location of the procedure, and other information relevant to the task to be demonstrated. For example, in one example implementation of the invention to evaluate dental students and provide formative feedback regarding dental procedures the students perform, the QR code provides patient information, dental equipment information, and other data relevant to a dental procedure to be performed.
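As an illustration of the kind of information a user QR code can carry, the sketch below serializes hypothetical evaluatee and procedure fields to JSON and encodes them with the third-party qrcode package; the payload format is an assumption for illustration, not the claimed encoding.

```python
import json
import qrcode  # third-party package: pip install qrcode[pil]

evaluatee_payload = {
    "evaluatee_id": "STU-042",           # bibliographic identifier (hypothetical)
    "name": "J. Doe",
    "task": "Crown preparation",         # task to be demonstrated
    "location": "Clinic B, Operatory 4",
    "equipment": "High-speed handpiece #12",
}

# Encode the payload as a QR image that can be printed on a badge or worksheet.
img = qrcode.make(json.dumps(evaluatee_payload))
img.save("STU-042_crown_prep.png")
```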
Once the evaluator scans the QR code, the code is sent to the iFF server in block 1145, and in block 1150 the survey application receives (from iFF server 120) the files stored by the administrator computer 110 on iFF server 120 (and iFF database 125) that include the bibliographic, procedure, location, and other data related to the behavior or skill that the evaluatee will demonstrate and that the evaluator will evaluate. The scanned QR code, created by the administrator for each user with all the embedded information necessary to identify and categorize the individual, prepopulates fields in the survey application, and its validity is checked against the credentials stored on iFF server 120.
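The validity check and prepopulation step can be pictured as follows; the credential store and field names are hypothetical and serve only to illustrate the workflow.

```python
import json

# Hypothetical credential store mirroring records kept on the iFF server 120.
registered_evaluatees = {"STU-042": {"name": "J. Doe", "cohort": "D3"}}

def prepopulate_survey(scanned_qr_text, survey_framework):
    """Check the scanned QR payload against registered credentials and, if
    valid, copy its bibliographic and procedural fields into the survey."""
    payload = json.loads(scanned_qr_text)
    evaluatee_id = payload.get("evaluatee_id")
    if evaluatee_id not in registered_evaluatees:
        raise ValueError("QR code does not match a registered evaluatee")
    survey_framework["evaluatee"] = evaluatee_id
    survey_framework["bibliographic"] = registered_evaluatees[evaluatee_id]
    survey_framework["procedure"] = payload.get("task")
    survey_framework["location"] = payload.get("location")
    return survey_framework
```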
As the evaluatee performs the behavior or skill (e.g., dental procedure), in block 1155 the evaluator observes the procedure, scans the evaluatee's QR code which opens the iFF mobile application's secure web browser prepopulated with embedded user credentials from the QR code (which is also validated against the iFF database). The embedded data is communicated to the mobile application through the URL. Based on the generated URL with user credentials, the web-based survey application displays the keywords, categories, and ratings (e.g., ranges, pos/neg, etc.) stored in the iFF server 120 and iFF database 125 that were used to populate the survey application above. In one example implementation of the invention, the entered data is stored within the web-based application itself, the iFF database 125, or as a CSV file on an administrator's computer 110. In one example of the dental use case, this information is stored within the web-based application and then automatically synchronized with the iFF database 125.
As the evaluator enters feedback into the survey application, in block 1160 the feedback is sent in real-time to the iFF server 120 where it is stored in iFF database 125. The feedback is simultaneously sent to iFF dashboard computer 140 in real-time in block 1165. iFF dashboard computer 140 collates, analyzes, and distributes the feedback data to other users. For example, in the case of a dental student performing a dental procedure, the feedback from the evaluator is sent to iFF server 120 as well as to peer review groups, other dental evaluators, and the evaluatee. The iFF dashboard computer 140 provides a graphical, web-based application that automatically acquires data from the survey application and stores the survey (feedback) data and ratings. The acquisition and storage processes can be scheduled to periodically move stored data from one point in the workflow to another (i.e., from one device or computer to another). For example, data stored within the framework of the web-based survey application needs to be moved to the iFF dashboard computer 140 for analysis. The frequency of the survey data transfer can be customized for every use case. In one example implementation of the invention, the formative feedback system 100 leverages the survey framework API to export data in a CSV (comma separated values) format to the iFF dashboard computer 140. The iFF dashboard computer 140 stores the received export data and configures the export data as dashboards using visualizations to tell the story of the survey data, and therefore the evaluation. The dashboards provide a user interface to organize and display formative feedback. For example, in one implementation of the invention, the iFF dashboard computer 140 modifies basic Microsoft Power BI dashboard files to organize and display the formative feedback. The Microsoft Power BI dashboard takes data from multiple sources (e.g., SQL databases, Oracle databases, CSVs, XLS, JSON, and other data sources), applies programmed queries to the consolidated data, and displays the information as an HTML5 web page. The file format used by the invention is a modified version of the Microsoft Power BI PBIX format. In one example implementation of the invention, data is scheduled to be exported and updated once a day. In other implementations, the data is scheduled to be exported and updated after every evaluation is completed.
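The scheduled transfer can be sketched generically as below. The fetch_new_evaluations function is a placeholder for the deployment-specific export mechanism (survey-platform API, manual export, or direct database read), and the once-a-day interval mirrors the example schedule mentioned above.

```python
import csv
import os
import time
from datetime import datetime

def fetch_new_evaluations():
    """Placeholder for the deployment-specific export; returns evaluation
    records gathered since the last transfer."""
    return [
        {"evaluatee": "STU-042", "category": "Preparation",
         "keyword": "Infection Control", "rating": "Exemplary",
         "timestamp": datetime.now().isoformat()},
    ]

def export_to_dashboard(path="dashboard_feed.csv"):
    """Append new evaluation rows to the CSV feed consumed by the dashboard."""
    rows = fetch_new_evaluations()
    if not rows:
        return
    new_file = not os.path.exists(path) or os.path.getsize(path) == 0
    with open(path, "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=rows[0].keys())
        if new_file:
            writer.writeheader()
        writer.writerows(rows)

if __name__ == "__main__":
    while True:                       # once-a-day transfer, per the example above
        export_to_dashboard()
        time.sleep(24 * 60 * 60)
```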
The iFF dashboard computer 140 also stores the feedback data while applying security to the stored data. The iFF dashboard computer collates the data in a number of different predetermined fashions (outlined further below) and displays the resulting feedback information to appropriate users according to row-level credentials. User accounts and security levels are established by administrator computer 110 when establishing the user accounts (e.g., evaluator and evaluatee accounts, peer review accounts, and other party accounts) as described above. The system 100 provides formative feedback to the interested parties in a customizable, intuitive fashion as outlined below with regard to the iFF dashboard and metrics section.
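A minimal sketch of row-level filtering is shown below, assuming a simple access model in which administrators see all records, evaluators see evaluations they authored, and evaluatees see only their own feedback; the actual security rules are configured per deployment.

```python
def visible_rows(evaluations, user_id, user_level):
    """Return only the evaluation rows a user is entitled to see."""
    if user_level == "administrator":
        return list(evaluations)
    if user_level == "evaluator":
        return [e for e in evaluations if e["evaluator"] == user_id]
    # evaluatee / learner: only their own feedback
    return [e for e in evaluations if e["evaluatee"] == user_id]

evaluations = [
    {"evaluator": "EVAL-001", "evaluatee": "STU-042", "keyword": "Infection Control"},
    {"evaluator": "EVAL-002", "evaluatee": "STU-007", "keyword": "Time Management"},
]
print(visible_rows(evaluations, "STU-042", "evaluatee"))
```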
Formative Feedback Keywords
As outlined above, the administrator computer 110 receives input from evaluators regarding the content and characteristics of the procedure/skill that an evaluatee will perform. Formative feedback is difficult and time consuming to record and analyze due to the variable nature of comments. Different evaluators often utilize synonymous terms to describe the same sentiment. Breaking down these comments to make them useful takes many hours of interpretation. Consequently, displaying this information in real-time is nearly impossible.
As further shown in the figures, the assessment comments, assessment phrases, and skill descriptions provided by evaluators often relate to specific steps performed when carrying out a task (e.g., a particular dental procedure) or relate to the environment in which the task is performed (e.g., individual categories of patients) or to overarching organizational goals (e.g., a focus of a particular practice is on exceptional bedside manner). The administrator computer 110 receives the comments, phrases, and descriptions and is tasked with parsing the feedback into keywords, which hold importance to an organization. Because the demands of each area of expertise and expectations of each organization/task are different, the exact metrics and parsing strategies are customized and determined on a use case by use case basis. The iFF system 100 is optimized to record standardized formative feedback, but there are no barriers to it recording other kinds of feedback (e.g., summative feedback), metrics (e.g., number of procedures done), or media (e.g., photos, soundbites, etc.).
In one example implementation of the invention, the administrator computer 110 receives comments, phrases, and descriptions and parses those data files using previously acquired academic data and established standards from CODA, the Commission on Dental Accreditation, which is a national organization that grants accreditation to educational institutions that wish to give degrees within the dental field. CODA provides each accredited dental institution with clear standards regarding evaluation tasks that must be reviewed, evaluated, and tracked for accreditation to be maintained. These standards were evaluated by multiple administrators, surveys were given to academicians within the institution to gauge which qualities were critical components of dental education, and the results were consolidated into four meaningful categories: Preparation, Process, Procedure, and Professionalism. Preparation is a user's ability to ready themselves for a given dental encounter. Process is a user's adherence to established procedures and protocols. Procedure is the technical performance on a dental procedure. Professionalism is a user's conduct in relation to the individuals within the given dental encounter. The administrators then parsed evaluation comments and criteria to create a set of neutral keywords (for example, 8 to 20) that describe qualities within these categories. For example, some keywords within the Preparation category are: Armamentarium, Detail Oriented, Evidence-Based, Infection Control, Informed Consent, and Knowledgeable. Displayed strengths or weaknesses within these keywords indicate competency or lack thereof in Preparation.
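By way of illustration, the parsing step can be approximated as matching evaluator comments against the standardized keyword list; in the sketch below the synonym table and the abbreviated keyword sets are hypothetical and would, in practice, come from the organization's own parsing rules.

```python
# Standardized neutral keywords grouped by the four categories described above.
KEYWORDS = {
    "Preparation": ["Armamentarium", "Detail Oriented", "Evidence-Based",
                    "Infection Control", "Informed Consent", "Knowledgeable"],
    "Process": ["Use of Resources"],          # abbreviated for the sketch
    "Procedure": ["Technique"],
    "Professionalism": ["Communication"],
}

# Hypothetical synonym table mapping free-text phrasings to keywords.
SYNONYMS = {
    "gloves": "Infection Control",
    "sterile technique": "Infection Control",
    "well read": "Knowledgeable",
    "consent form": "Informed Consent",
}

def parse_comment(comment):
    """Map a free-text evaluator comment to (category, keyword) pairs."""
    hits = []
    lowered = comment.lower()
    for phrase, keyword in SYNONYMS.items():
        if phrase in lowered:
            category = next(c for c, kws in KEYWORDS.items() if keyword in kws)
            hits.append((category, keyword))
    return hits

print(parse_comment("Good sterile technique, but forgot the consent form."))
```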
In block 210, the administrator computer 110 generates user QR codes as outlined above. In block 214, the evaluator determines that a procedure requires assessment, and in block 218, the evaluator observes the performance of an evaluatee performing the procedure/task. The evaluator records observed keywords based on the evaluatee's performance in block 222.
In block 226, the evaluator and the student determine that the procedure requires self-assessment by the student, and the student records keywords indicative of her performance in block 230. In block 234, the evaluator and the student review aggregated evaluator and self-assessments and optimize student performance based on formative feedback from the assessments in block 238. For example, a faculty member (i.e., evaluator) indicates that a student's “Use of Resources” was not optimal while the student followed “Infection Control” protocols well. The evaluator and the student can then optimize the student's performance by discussing and reviewing improvement opportunities for those skills in the procedure that were not optimal and can review the student's high levels of achievement and competence in those skills in the procedure on which the student performed well. This efficient, standardized, and granular acquisition of comments allows a user to capture the essence of an encounter without impeding their productivity.
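One simple way to aggregate the evaluator and self-assessment records for this review is to compare the two keyword sets, as in the hypothetical sketch below.

```python
def compare_assessments(evaluator_keywords, self_keywords):
    """Split recorded keywords into agreements and discrepancies so the
    evaluator and student can focus the review discussion."""
    evaluator_set, self_set = set(evaluator_keywords), set(self_keywords)
    return {
        "agreed": sorted(evaluator_set & self_set),
        "evaluator_only": sorted(evaluator_set - self_set),
        "self_only": sorted(self_set - evaluator_set),
    }

result = compare_assessments(
    evaluator_keywords=["Infection Control", "Use of Resources"],
    self_keywords=["Infection Control", "Time Management"],
)
print(result)
```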
Additionally, in block 242, the evaluator and the evaluatee review and edit keywords and key phrases used in the formative feedback survey to improve the assessments and to provide more meaningful evaluation of skills and procedures. Additional key words and key phrases, as well as edits to existing key words and key phrases, are provided to the administrator computer for use on subsequent formative feedback assessments. Reviewing and revising the assessment criteria helps improve overall institutional outcomes.
As outlined above, because the demands of each to-be-evaluated area of expertise and the expectations of each organization and task are different, the exact metrics are determined on a use case by use case basis. Typically, the administrator (computer) assesses all the feedback currently available from evaluators, identifies the evaluation criteria selected by the organization, task, evaluators, etc. as important, and then generates neutral descriptive terms (i.e., keywords and/or key phrases) that describe these areas using parsing rules and truncation based upon evaluation guidelines provided by the organization, evaluator(s), and credentialing bodies. Often, the system 100 uses truncation and parsing rules generated directly by evaluators. Examples of the resulting key words and key phrases are shown in the accompanying figures.
Optimization of Assessment Workflow
The formative feedback system of the invention minimizes error and effort in the feedback acquisition process. The system utilizes QR codes or other optical labels, including matrix bar codes that include data and information regarding the object to which they are attached. The formative feedback system of the invention saves both evaluators and evaluatees time, relieving users of the need to manually enter bibliographic information of the evaluatee and the skill or task that the evaluatee is about to perform. This time savings provides an important benefit in large organizations where many individuals (e.g., evaluatees/learners/students) are evaluated at any given time. With the formative feedback system of the invention, evaluators tap, scan, and evaluate. With the time saved on each individual feedback session, evaluators are able to spend the majority of their time providing feedback to the evaluatees rather than inputting credentials and selecting the individual to be evaluated. This is in stark contrast to other assessment systems currently available. Existing systems require at least two to three minutes to record any assessment. With the systems and methods of the invention, the process takes less than twenty seconds to record an evaluator's feedback and less than a minute for the system to process the feedback information and generate analytics to interpret the collected data to make meaningful observations.
For example, an evaluator can access the formative feedback system of the invention and conduct the evaluation, feedback, and analytics review on a digital device, such as a smart phone, computer, tablet, and other computing devices.
As outlined above with regard to the system components, the evaluator conducts the evaluation on the client side computer 130, as shown pictorially in the accompanying figures. Additional evaluation criteria are accessed by scrolling through the list.
On-Demand Dashboard Analytics
Because the data acquired is standardized, robust reporting is possible through the use of dashboard technology. Advanced, custom analytics are applied to the evaluation data, modified to each individual administrator's needs, and then displayed in real-time on mobile and desktop platforms. This enables the formative feedback system of the invention to empower users to close the assessment loop by showing them pertinent information succinctly at any time to guide the decision making process. Additionally, the data can be analyzed from multiple perspectives in an upstream and downstream manner, resulting in real-time 360-degree assessments without increasing administrative overhead or user time consumption.
The iFF dashboard computer 140 provides a visualization of the collected evaluation data to provide a picture of the evaluatee and the evaluatee's competence in performing the skills upon which they were evaluated. The iFF dashboard computer 140 provides a customizable web-based application which applies trimming of data, concatenation of columns, calculations, row-level security definitions, and other visual analysis tools and processes to sets of evaluation data stored in the iFF server 120 and iFF dashboard computer 140. The iFF dashboard computer 140 automatically takes the evaluation information gathered and sent by the survey application and displays it to users in an organized, meaningful, graphical format and allows users to filter results, examples of which are shown in the accompanying figures.
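A simplified sketch of the kind of calculation the dashboard applies is shown below; it counts recorded keyword mentions per month for a selected evaluatee so a trend can be charted, and assumes evaluation records with ISO-format timestamps.

```python
from collections import Counter
from datetime import datetime

def keyword_trend(evaluations, evaluatee_id, keyword):
    """Count how often a keyword was recorded for one evaluatee, per month."""
    counts = Counter()
    for e in evaluations:
        if e["evaluatee"] == evaluatee_id and e["keyword"] == keyword:
            month = datetime.fromisoformat(e["timestamp"]).strftime("%Y-%m")
            counts[month] += 1
    return dict(sorted(counts.items()))

evaluations = [
    {"evaluatee": "STU-042", "keyword": "Infection Control",
     "timestamp": "2017-01-12T09:30:00"},
    {"evaluatee": "STU-042", "keyword": "Infection Control",
     "timestamp": "2017-02-03T14:10:00"},
]
print(keyword_trend(evaluations, "STU-042", "Infection Control"))
```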
The dashboard reports can be customized to provide evaluators and students with up-to-date information, as well as trends over time periods of their choosing.
This application claims the benefit of priority of U.S. Provisional Application No. 62/395,714 filed on Sep. 16, 2016. This application incorporates by reference the entire contents of U.S. Provisional Application No. 62/395,714 filed on Sep. 16, 2016.
Filing Document | Filing Date | Country | Kind
---|---|---|---
PCT/US2017/052007 | 9/18/2017 | WO | 00
Number | Date | Country
---|---|---
62395714 | Sep 2016 | US