Aspects of the present invention relate to knowledge assessment and learning and to microprocessor-based and networked testing and learning systems. Aspects of the present invention also relate to knowledge testing and learning methods, and more particularly, to methods and systems for Confidence-Based Assessment (“CBA”) and Confidence-Based Learning (“CBL”), in which a single answer from a learner generates two metrics with regard to the individual's confidence and correctness in his or her response.
Traditional multiple-choice testing techniques to assess the extent of a person's knowledge in a subject matter include varying numbers of possible choices that are selectable by one-dimensional or right/wrong (RW) answers. A typical multiple-choice test might include questions with three possible answers, where generally one of such answers can be eliminated by the learner as incorrect as a matter of first impression. This gives rise to a significant probability that a guess among the remaining answers will earn the learner credit for an answer that he or she did not actually know but simply guessed well, with no mechanism for the system to help the learner actually learn the material. Under this situation, a successful guess masks the true extent or state of knowledge of the learner: whether he or she is informed (i.e., confident with a correct response), misinformed (i.e., confident in a response that is nevertheless incorrect), or lacks information (i.e., the learner would admit that he or she does not know the correct answer, but the format does not allow a response in that fashion). Accordingly, the traditional multiple-choice, one-dimensional testing technique is highly ineffectual as a means to measure the true extent of knowledge of the learner. Despite this significant drawback, traditional one-dimensional, multiple-choice testing techniques are widely used by information-intensive and information-dependent organizations such as banks, insurance companies, utility companies, educational institutions and governmental agencies.
Traditional multiple-choice, one-dimensional (right/wrong) testing techniques are forced-choice tests. This format requires individuals to choose one answer, whether they know the correct answer or not. If there are three possible answers, random choice will result in a 33% chance of scoring a correct answer. One-dimensional scoring algorithms usually reward guessing. Typically, wrong answers are scored as zero points, so that there is no difference in scoring between not answering at all and taking an unsuccessful guess. Since guessing sometimes results in correct answers, it is always better to guess than not to guess. A small number of traditional testing methods are known to assign a negative score for wrong answers, but the algorithm is usually designed such that eliminating at least one answer shifts the odds in favor of guessing. So for all practical purposes, guessing is still rewarded.
In addition, prior one-dimensional testing techniques encourage individuals to become skilled at eliminating possible wrong answers and making best-guess determinations at the correct answers. If individuals can eliminate one possible answer as incorrect, the odds of picking a correct answer reach 50%. In the case where 70% is passing, individuals with good guessing skills are only 20% away from a passing grade, even if they know almost nothing. Thus, the one-dimensional testing format and its scoring algorithm shift individuals' motivation away from self-assessment and receiving accurate feedback, and toward inflating test scores to pass a threshold.
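The incentive arithmetic described above can be made concrete with a short sketch, offered for illustration only (the function and its point values are assumptions, not part of any disclosed system):

```python
# Illustrative arithmetic only (not part of the disclosed system):
# expected score of a blind guess under conventional right/wrong scoring.

def expected_guess_score(num_choices: int, num_eliminated: int = 0,
                         wrong_penalty: float = 0.0) -> float:
    """Expected points from guessing among the remaining choices,
    with 1 point for a correct answer and an optional penalty for
    a wrong one."""
    remaining = num_choices - num_eliminated
    p_correct = 1.0 / remaining
    return p_correct * 1.0 - (1.0 - p_correct) * wrong_penalty

print(expected_guess_score(3))                    # 0.33... vs 0.0 for abstaining
print(expected_guess_score(3, num_eliminated=1))  # 0.5: one distractor eliminated
```

As the output shows, whenever the penalty for a wrong answer is zero, the expected value of guessing is always positive, so guessing strictly dominates abstaining.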
Aspects of the present invention provide a method and system for knowledge assessment and learning that accurately assesses the true extent of a learner's knowledge, and provides learning or educational materials remedially to the subject according to identified areas of deficiency. The invention incorporates the use of Confidence Based Assessments and Learning techniques and is deployable on a microprocessor based computing device or networked communication client-server system.
A services-oriented system for knowledge assessment and learning comprises a display device for displaying to a learner at a client terminal a plurality of multiple-choice questions and two-dimensional answers; an administration server adapted to administer one or more users of the system; a content management system server adapted to provide an interface for the one or more users to create and maintain a library of learning resources; a learning system server comprising a database of learning materials, wherein the plurality of multiple-choice questions and two-dimensional answers are stored in the database for selected delivery to the client terminal; and a registration and data analytics server adapted to create and maintain registration information about the learners. The system performs a method of receiving a plurality of two-dimensional answers to the plurality of multiple-choice questions; determining, after a period of time, which of the answered multiple-choice questions remain unfinished and which are completed; separating the unfinished questions from the completed questions; determining which of the unfinished and completed questions to include in a mastery-eligible list of questions; and assigning a weight to each of the mastery-eligible questions based on the current learning state of the learner, a target learning score of the learner, and a calculated dopamine level of the learner.
The methods underlying the system have been purposely created such that the methods leverage key findings and applications of research related to learning and memory, with the intention of significantly increasing the efficiency and effectiveness of the learning process. Those methods are encapsulated in the various embodiments of the system.
Aspects of the present invention build upon the Confidence-Based Assessment (“CBA”) and Confidence-Based Learning (“CBL”) systems and methods disclosed in U.S. patent application Ser. No. 13/216,017, U.S. patent application Ser. No. 13/029,045, U.S. patent application Ser. No. 12/908,303, U.S. patent application Ser. No. 10/398,625, U.S. patent application Ser. No. 11/187,606, and U.S. Pat. No. 6,921,268, all of which are incorporated into the present application by reference and all of which are owned by Knowledge Factor, Inc. of Boulder, Colo.
The present description focuses on embodiments of the system pertaining to the system architecture, user interface, algorithm, and other modifications. At times other embodiments of the system are described to highlight specific similarities or differences, but those descriptions are not meant to be inclusive of all embodiments of the system as described in related prior patents and patent applications owned by Knowledge Factor.
As shown in
Any number of users may perform one function or fill one role only, while a single user may perform several functions or fill many roles. For example, an administrator 104 may also serve as a registrar 108 or analyst 110 (or other roles), or an author 106 may also serve as an analyst 110.
Groups of learner devices and administrator devices are connected to one or more network servers 204a-204c via the Internet or other network 206. Servers and associated software 208a-208c (including databases) are equipped with storage facilities 210a-210c to serve as a repository for user records and results. Information is transferred via the Internet using industry standards such as the Transmission Control Protocol/Internet Protocol (“TCP/IP”).
In one embodiment, the system 200 conforms to an industry standard distributed learning model. Integration protocols, such as Aviation Industry CBT Committee (AICC), Learning Tools Interoperability (LTI), and customized web services, are used for sharing courseware objects across systems.
Embodiments and aspects of the present invention provide a method and system for conducting knowledge assessment and learning. Various embodiments incorporate the use of confidence based assessment and learning techniques deployable on a micro-processor-based or networked communication client-server system, which gathers and uses knowledge-based and confidence-based information from a learner to create continually adaptive, personalized learning plans for each learner. In a general sense the assessments incorporate non-one-dimensional testing techniques.
In accordance with another aspect, the present invention comprises a robust method and system for Confidence-Based Assessment (“CBA”) and Confidence-Based Learning (“CBL”), in which one answer generates two metrics with regard to the individual's confidence and correctness in his or her response to facilitate an approach for immediate remediation. This is accomplished through various tools including, but not limited to:
1. An assessment and scoring format that eliminates the need to guess at answers. This results in a more accurate evaluation of “actual” information quality.
2. A scoring method that more accurately reveals what a person: (1) accurately knows; (2) partially knows; (3) doesn't know; and (4) is sure that they know, but is actually incorrect.
3. An adaptive and personalized knowledge profile that focuses only on those areas that truly require instructional or reeducation attention. This eliminates wasted time and effort training in areas where attention really isn't required.
4. A declarative motivational format that assesses the learner's goals and experience with the subject matter. For example, a user who is preparing for a high-stakes test has an intrinsic motivation that is different than somebody completing a corporate-required compliance training. This scoring method may take the form of identifying the date by which the information, optionally as part of a larger curriculum, must be mastered.
5. Timing tools and techniques that identify whether or not the user is randomly guessing or simply trying to “game” the system in order to complete a module as quickly as possible.
6. A scoring method that is further enhanced by the declarative motivation, timing, and confidence metrics above to more accurately identify learners who are interested in completing a learning outcome without actual mastery of the material.
7. An adaptive and personalized knowledge potentiation schedule that prescribes the optimal time(s) for a learner to refresh a previously taken module in order to extend the time that the information is truly mastered, while minimizing the ongoing required study time to achieve such mastery.
In learning modules, the foregoing methods and tools are implemented by a method or “learning cycle” such as the following:
1. The learner is asked to complete a declarative motivational and expertise assessment. This begins with a set of questions around the dates by which the knowledge must be mastered, the amount of time the learner is willing to dedicate to studying, the goals of the modules (long-term knowledge transfer or transactional “accomplishment” modules), and the learner's impression of their own expertise or pre-existing knowledge about the subject matter.
2. In some embodiments, the learner's declarative motivation and expertise may be further enhanced by opting-in to certain game features that may make the subject matter more challenging by adding points, levels, leaderboards, timing restrictions on responses and viewing explanations, and other game mechanics.
In some embodiments, the aforementioned motivational and expertise assessment can be overridden by the instructor or content curator. In this case, the learner is asked to complete a formative assessment. This begins with the step of compiling a standard three- to five-answer multiple-choice test into a structured CBA format with possible answers for each question that cover three states of mind: confidence, doubt, and ignorance, thereby more closely matching the state of mind of the learner.
3. Review the personalized knowledge profile, which is a summary of the learner's responses to the initial assessment relative to the correct responses. The Confidence Based (CB) scoring algorithm is implemented in such a way that it teaches the learner that guessing is penalized, and that it is better to admit doubts and ignorance than to feign confidence. The CB answers are then compiled and displayed as a personalized knowledge profile to more precisely segment answers into meaningful regions of knowledge, giving individuals and organizations rich feedback as to the areas and degrees of mistakes (misinformation), unknowns, doubts and mastery. The personalized knowledge profile is a much better metric of performance and competence. For example, in the context of the corporate training environment, the individualized learning environment encourages better-informed employees who retain higher information quality, thereby reducing costly knowledge and information errors and increasing productivity. Progress indicators are provided to the learner to reinforce the concept that learning is a journey that doesn't begin with perfect knowledge, but begins with an accurate self-assessment of knowledge.
4. Review the question, response, correct answer, and explanation in regard to the learning material. Ideally, explanations for both correct and incorrect answers are provided (at the discretion of the author).
5. Review the Additional Learning (in some embodiments described as “Expand Your Knowledge”) learning materials to gain a more detailed understanding of the subject matter (breadth and depth).
6. Iteration—The process can be repeated as many times as required by the individual learner in order to demonstrate an appropriate understanding of, and confidence in, the subject matter. In some embodiments, and as part of this iterative model, answers scored as confident and correct (depending on which algorithm is used) can be removed from the list of questions presented to the learner so that the learner can focus on his/her specific skill gap(s). During each iteration, the number of questions presented to the learner can be represented by a subset of all questions in a module; this is configurable by the author of the module. In addition, the questions, and the answers to each question, are presented in random order during each iteration through the use of a random number generator invoked within the software code that makes up the system.
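As a rough illustration of this iteration step, the following Python sketch removes questions already answered confident-and-correct and shuffles the remainder into the next round; the data layout (dicts with "id" and "answers" keys) and the helper itself are assumptions, not the disclosed implementation:

```python
import random

def next_learning_round(questions, profile, round_size):
    """One illustrative iteration: drop questions already answered
    confident-and-correct, shuffle the rest, and present a subset.

    questions: list of dicts with "id" and "answers" keys (assumed
    layout); profile: question id -> last response category.
    """
    remaining = [q for q in questions
                 if profile.get(q["id"]) != "confident_correct"]
    random.shuffle(remaining)              # question order randomized per round
    subset = remaining[:round_size]        # author-configured round size
    for q in subset:
        random.shuffle(q["answers"])       # answer order randomized as well
    return subset
```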
In some embodiments of the system, the random algorithm is replaced with a more deterministic algorithm that uses a statistically dominant “path to mastery” to facilitate the most effective route of educating a learner with a particular knowledge profile. In some embodiments of the system, the iteration size will vary based on calculation of the learner's working memory, which will be derived from their success patterns in immediately previous and historical iterations. See
In some embodiments of the system, the questions delivered to a learner from within an author's defined module may be supplemented with more difficult or simpler questions covering the same subject matter (or subject matter previously experienced by the learner) in order to further refine the calculation of a learner's working memory.
In accordance with one aspect, the invention produces a personalized knowledge profile, which includes a formative and summative evaluation for the learner and identifies various knowledge quality levels. Based on such information, the system correlates, through one or more algorithms, the user's knowledge profile to a database of learning materials, which is then communicated to the system user or learner for review and/or reeducation of the substantive response.
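A minimal sketch of how a single two-dimensional answer might be mapped to these knowledge quality levels follows; the category labels track the text, while the function itself is an illustrative assumption:

```python
def knowledge_region(confidence: str, correct: bool) -> str:
    """Map one two-dimensional answer to a knowledge quality level.

    confidence mirrors the "I am sure" / "I am partially sure" /
    "I don't know yet" categories described in the text.
    """
    if confidence == "not_sure":
        return "unknown"                     # admitted ignorance
    if confidence == "sure":
        return "informed" if correct else "misinformed"
    return "doubt_correct" if correct else "doubt_incorrect"

# Example: a confident but wrong answer lands in the misinformed region.
print(knowledge_region("sure", False))       # misinformed
```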
Aspects of the present invention are adaptable for deployment on a stand-alone personal computer system. In addition, they are also deployable on a computer network environment such as the World Wide Web, or an intranet or mobile network client-server system, in which, the “client” is generally represented by a computing device adapted to access the shared network resources provided by another computing device, the server. See for example the network environments described in conjunction with
With reference to
The various tasks of the knowledge assessment and learning system are supported by a web services-based network architecture and software solution.
The System Administration module 302 includes such components as a login function 310, single sign-on function 312, a system administration application 314, an account service module 316 and an account database structure 318. The System Administration module 302 functions to administer the various customer accounts present in the application.
The CMS module 304 includes an authoring application 322 that provides content authoring functionality to author and structure the learning elements and curriculum, a module review function 324, an import/export function 320 that allows for XML or other form-based data import, an authoring service 326, a published content service 328, an authoring database 330 and a published content database 332. The CMS module 304 allows for curriculum functionality to manage the various elements that make up the curriculum, and publishing functionality to formally publish the learning content so that it is available to end-users. The CMS module also allows for content to be assigned an initial level of difficulty, a taxonomic level of learning objective (e.g., Bloom's), tags to define the relatedness of one set of content to another, and tags to identify the subject matter.
The Learning module 306 includes a learner portal 336, a learning applications function 334 and a learning service function 338. Also included is a learning database 340. Learning and assessment functionality leverages one or more of the other aspects and features described herein.
The Registration and Data Analytics (RDA) 308 includes a registration application 342, an instructor dashboard 344 and a reporting application 346, a registration service 348, a reporting service 350, a registration database 352 and a data warehouse database 354. The Registration and Data Analytics 308 includes functionality to administer registration of the various end-user types in the particular application and functionality to display relevant reports to end-users in a context dependent manner based on the role of the user.
In operation, any remotely located user may communicate via a device with the system (e.g.
Each application includes a user login capability, incorporating necessary security processes for system access and user authentication. The login process prompts the system to effect authentication of the user's identity and authorized access level, as is generally done in the art.
Referring again to
Authoring further provides editorial and formatting support facilities in a What You See Is What You Get (WYSIWYG) editing window that creates Hypertext Mark-Up Language (“HTML”) and other browser/software language for display by the system to various user types. In addition, authoring provides hyperlink support and the ability to include and manage multiple media types common to web-based applications.
In another embodiment of the authoring environment, content can be entered in a simpler format, such as Markdown, and can be further annotated by additional extensions specific to the authoring application.
Authoring is adapted to also allow the user to upload a text-formatted file, such as XML or CSV, for use in importing an entire block of content or portion thereof using bulk upload functionality. In addition, authoring is also adapted to receive and utilize media files in various commonly used formats such as *.GIF, *.JPEG, *.MPG, *.FLV and *.PDF (this is a partial list of supported file types). This feature is advantageous in the case where learning or assessment requires an audio, visual and/or multi-media cue. In addition, authoring is also adapted to retain a link to various positions within the original source file, so that the learner can refer to the exact place in the source text where the explanation and additional learning are contained.
The authoring application 322 allows authors to use existing learning materials or create new learning materials in the appropriate format. Authoring is accomplished by creating learning objects in the authoring application, or uploading new learning objects through the bulk upload feature, and then combining selected learning objects into learning or assessment modules. Learning objects in the system are comprised of the following:
Each question must have a designated answer as the correct choice, and the other two to four answers are identified as being incorrect or misinformed responses, which are generally constructed as plausible distractors or commonly held misinformation. In the learning example as shown in
Learning objects are organized into modules, and it is these modules that are assigned to learners. The learning objects within each module are then displayed to the learner based on the scoring and display algorithm in the learning application.
In another embodiment of the system, Learning Objects are categorized by set (as part of a curriculum, the most common form would be a Chapter) and a learner is presented with a dynamically generated module based on the instructor- or learner-indicated level of desired difficulty, or the amount of time the assignment should take.
Once a learning or assessment module has been created using the authoring application, the module is published in preparation for presentation to learners via the learning application. The learning application then configures the one-dimensional right/wrong answers into the non-one-dimensional answer format. Thus, in one embodiment of the present invention in which a query has multiple possible answers, a non-one-dimensional test, in the form of a two-dimensional response, is configured according to predefined confidence categories or levels.
Three levels of confidence categories are provided to the learner, which are designated as: 100% sure (learner selects only one answer and categorizes that response as “I Am Sure”; see e.g.
In another embodiment of the system, the level of confidence is more granular; specifically 100% sure for one answer, 75% partially sure for one answer, 50% partially sure for each of two answers, and 0% sure for “I don't know yet”.
In another embodiment of the system, the level of confidence can be specified by the learner in a range from 0% to 100% for each of the possible answers (with the total summing to 100), exploiting a risk/reward trigger when a certain amount of points or other scoring mechanism is deducted for each incorrect response.
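A minimal sketch of this percentage-allocation embodiment follows, assuming hypothetical point values for the reward and the risk/reward deduction:

```python
def score_allocation(allocation: dict, correct_answer: str,
                     reward: float = 100.0, penalty: float = 50.0) -> float:
    """Score a percentage-confidence response (hypothetical point values).

    allocation maps answer id -> percent confidence; totals must sum
    to 100. Confidence on the correct answer earns points; confidence
    placed on any incorrect answer triggers the risk/reward deduction.
    """
    if sum(allocation.values()) != 100:
        raise ValueError("confidence must sum to 100")
    score = 0.0
    for answer, pct in allocation.items():
        if answer == correct_answer:
            score += reward * pct / 100.0
        else:
            score -= penalty * pct / 100.0
    return score

# 75% on the right answer, 25% hedged on a wrong one.
print(score_allocation({"a": 75, "b": 25, "c": 0}, correct_answer="a"))  # 62.5
```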
As seen from the above discussion, the system substantially facilitates the construction of non-one-dimensional queries or the conversion of traditional one-dimensional queries into multi-dimensional queries. The authoring functions of the present invention are “blind” to the nature of the materials from which the learning objects are constructed. For each learning object, the system acts upon the form of the test query and the answer choice selected by the learner. The algorithms built into the system control the type of feedback that is provided to the learner, and also control the display of subsequent learning materials that are provided to the learner based on learner responses to previous queries.
The CMS allows an author to associate each query with specific learning materials or information pertaining to that query in the form of explanations or Additional Learning. The learning materials are stored by the system, providing ready access for use in existing or new learning objects. These learning materials include text, animations, images, audio, video, web pages, and similar sources of training materials. These content elements (e.g., images, audio, video, PDF documents, etc.) can be stored in the system, or on separate systems and be associated with the learning objects using standard HTML and web services protocols.
The system enables the training organization to deliver learning and/or assessment modules. The same learning objects can be used in both (or either) learning and assessment modules. Assessment modules utilize the following elements of the learning objects in the system:
Each learning module is displayed to the learner as two separate, repeated segments. First, the learner is presented with a formative assessment that is used to identify relevant knowledge and confidence gaps manifested by the learner. After the learner completes the formative assessment, the learner is given an opportunity to fill knowledge gaps through review of explanations and Additional Learning information. The learner continues to be presented with rounds of formative assessment and then review until he/she has demonstrated mastery (confident and correct responses) for the required percentage of learning objects in the module. These rounds may be lengthened or shortened based on the working memory capacity of the learner as calculated in previous or current learning interactions.
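The round structure just described can be sketched as follows; the three callables (assess, review, select_round) are hypothetical stand-ins for the system components described elsewhere in this document:

```python
def run_learning_module(objects, assess, review, select_round,
                        required_pct=100.0):
    """Illustrative round loop: formative assessment, then review,
    until the required percentage of learning objects is mastered
    (confident and correct).

    assess(q) -> (confident, correct); review(qs) displays explanations
    and Additional Learning; select_round(objects, mastered) picks the
    next round. All three callables are assumptions supplied here only
    to make the cycle concrete.
    """
    mastered = set()
    while 100.0 * len(mastered) / len(objects) < required_pct:
        round_qs = select_round(objects, mastered)
        for q in round_qs:                 # formative assessment phase
            confident, correct = assess(q)
            if confident and correct:
                mastered.add(q)
        review(round_qs)                   # learning/review phase
    return mastered
```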
The author (and other roles related to curriculum management that will be presented later in this document) can set the following scoring options in learning modules:
In each round of learning, the learning objects are presented to the learner in random order (or in a pre-defined order as set by the Author, or in an order designed to identify the learner's expertise and working memory capacity), and the potential answers to each question are also presented in random order each time that the question is presented to the learner. Which learning objects are displayed in each round (or question set) is dependent on (a) the scoring options listed above, and (b) the algorithms built into the Learning application. The algorithms are described in more detail later in this document. Assessment modules are structured such that all learning objects in the module are presented in a single round, which may be shortened or lengthened depending on the adaptive assessment algorithm.
In accordance with one embodiment, the author (and other roles related to curriculum management that will be presented later in this document) can set the following scoring options in assessment modules: whether questions in the assessment module will be presented to the learner in random order, in an order defined by the author, or in a manner designed to determine, as quickly as possible, the actual knowledge of the learner as it relates to the content in the module or curriculum part.
Presentation of the learning and assessment modules to the learner is initiated by first publishing the desired modules from within the authoring application (or CMS). Once the modules are published in the CMS, the learning application is then able to access the modules. Learners then must be registered for the modules in the Registration and Data Analytics application that is part of the system, or in Learning Management Systems or portals operated by customers and which have been integrated with the system.
As an example of one embodiment, the queries or questions would consist of three answer choices and a two-dimensional answering pattern that includes the learner's response and his or her confidence category in that choice. The confidence categories are: “I am sure,” “I am partially sure,” and “I don't know yet.” Another embodiment of the system allows an author to configure the system such that a query without any response is deemed as, and defaults to, the “I don't know yet” choice. In other embodiments, the “I don't know yet” choice is replaced with an “I am not sure” or “I don't know” choice. In other embodiments, up to five answer choices may be provided to the learner.
In other embodiments, the confidence categories would be replaced with a range of confidence from 0% to 100% expressed for each of the possible answers (with the total summing to 100), exploiting a risk/reward trigger when a certain amount of points or other scoring mechanism is deducted for each incorrect response.
Learning and/or assessment modules can be administered to separate learners at different geographical locations and at different time periods. In one embodiment of the system, relevant components of the learning objects associated with the learning and/or assessment modules are presented in real-time, and in accordance with the algorithm, between the server and a learner's device, and progress is communicated to the learner as he/she proceeds through the module. In another embodiment of the system, the learning and/or assessment modules can be downloaded in bulk to a learner's device, where the queries are answered in their entirety, explanations and Additional Learning can be reviewed, and real-time progress is provided to the learner, before the responses are communicated (uploaded) to the system.
The system captures numerous time measurements associated with learning or assessment. For example, the system measures the amount of time that was required for the subject to respond to any or all of the test queries presented. The system also tracks how much time was required to review explanation materials and Additional Learning information. When so adapted, the time measuring script or subroutine functions as a time marker. In some embodiments of the present invention, the electronic time marker also identifies the time for the transmission of the test query by the courseware server to the learner, as well as the time required for the learner's response to be returned to the server. In some embodiments, the system uses the time as an indicator of question difficulty or complexity, and in some embodiments the system uses the ratio of the time spent answering a question to the time spent reading the explanation as an indicator of “churn,” i.e., that the user is simply trying to get through the material as fast as possible without attempting to actually learn the material.
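A minimal sketch of such a churn indicator follows; the ratio floor and thresholds are illustrative assumptions, not constants of the disclosed system:

```python
def churn_indicator(answer_seconds: float, explanation_seconds: float,
                    threshold: float = 5.0) -> bool:
    """Flag likely "churn": the learner answers quickly and spends
    almost no time on the explanation. The ratio and thresholds are
    illustrative assumptions, not constants of the disclosed system."""
    ratio = answer_seconds / max(explanation_seconds, 0.1)
    return ratio > threshold or (answer_seconds < 2.0
                                 and explanation_seconds < 1.5)
```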
In one embodiment of the system, if numerous questions are answered too quickly and incorrectly, the system will prompt the learner to reassess their motivation and determine, for example, if they are simply “evaluating” the system, or are actually interested in knowledge transfer. The learner's response may be used in further testing or questioning.
Various user interface embodiments are contemplated and are described. For example, learner answers may be selected on a user interface screen and dragged into an appropriate response area such as “confident”, “doubtful”, and “not sure” (e.g.
In the following discussion certain terms of art are used for ease of reference but it is not the intention here to limit the scope of these terms in any way other than as set forth in the claims.
ampObject: Refers to an individual question/answer presented to a learner or other user of the assessment and learning system (including introductory material), the learning information that is displayed to the learner (explanations and Additional Learning), and metadata associated with each ampObject that is available to the author and analyst. This ampObject structure was previously referred to in this document as a “learning object”.
Module: Refers to a group of ampObjects (learning objects in the system) that are presented to a learner in any given learning and/or assessment situation. Modules are either created by the content author, or can be dynamically created by the learner or instructor as part of a curriculum. Modules are the smallest curriculum element that can be assigned to or created for a learner.
To build, develop or otherwise compile a learning or assessment module in a CB format entails converting a standard assessment format (e.g., multiple-choice, true-false, fill-in-the-blank, etc.) into questions answerable by simultaneously providing a response as to the correctness of the answer (i.e., knowledge) and the learner's degree of certainty in that response (i.e., confidence).
Examples of two different implementations of the user interface for the assessment portion of the CBA or CBL environment are provided in
In the example of
A learner's confidence is highly correlated with knowledge retention. As stated above, certain aspects ask for and measure a learner's level of confidence. Further aspects of the present invention go further by requiring learners to demonstrate full confidence in their answers in order to reach true knowledge, thereby increasing knowledge retention. This is accomplished in part by an iteration step (Adaptive Repetition™). After individuals review the results of the material in the system as above, learners can retake the assessment as many times as necessary to reach mastery, as demonstrated by being both confident and correct in that knowledge. Learning in accordance with this adaptively repetitive methodology, in combination with non-one-dimensional assessment, yields multiple personalized knowledge profiles, which allows individuals to understand and measure their improvement throughout the assessment process.
In one embodiment, when an individual retakes the formative assessment in a learning module, the questions are randomized, such that individuals do not see the same questions in the same order from the previous assessment. Questions are developed in a database in which there is a certain set of questions to cover a competency or set of competencies. To provide true knowledge acquisition and confidence of the subject matter (mastery), a certain number of questions are presented each time rather than the full bank of questions (spacing or chunking). Research demonstrates that such spacing significantly improves long-term retention.
Display of ampObjects (Questions) to Learners:
In some embodiments, questions (in ampObjects) are displayed to the learner in their entirety (all questions at once in a list) and the user also answers the questions in their entirety. In another embodiment, the questions are displayed one at a time. In accordance with further embodiments, learning is enhanced by an overall randomization of the way questions are displayed to a learner, and by the number and timing of the display of ampObjects to the learner. Broadly speaking, the selected grouping of questions allows the system to better tailor the learning environment to a particular scenario. As set forth above, in some embodiments the questions and groups of questions are referred to as ampObjects and modules, respectively. In one embodiment, the author may configure whether the ampObjects are “chunked” or otherwise grouped so that only a portion of the total ampObjects in a given module are presented in any given round of learning. The ampObjects may also be presented in a randomized, sequential or partially deterministic order to the user in each round or iteration of learning. The author of the learning system may specify that answers within a given ampObject are always displayed in random order during each round of learning.
The randomized and deterministic order of question presentation may be incorporated into both the learning and assessment portions of the learning environment. In one embodiment, during the formative assessment portion of learning, the questions and answers are displayed only in a random order during each question set of learning. In another embodiment, the assessment is delivered in an adaptive manner, with question difficulty increasing as learners get more answers correct, and question difficulty decreasing if learners continue to struggle with questions. Various other schemes can be applied to the order in which learning objects are displayed to the user. For example, one type of “standard assessment” may require that the ampObjects be displayed in either random or sequential order during one assessment, or that they be displayed only as either sequential or random. One type of “adaptive assessment” requires that the ampObjects be displayed in an order that most quickly identifies the learner's areas of strengths and weaknesses relative to the curriculum being served. In the “switches” section below, further details are shown that allow an author to “dial up” or “dial down” the mastery level of the assessment.
Aspects here use a weighting system to determine the probability of a question being displayed in any given round or set based on how the ampObject was previously answered. In one embodiment, there is a higher probability that a particular question will be displayed if it was answered incorrectly (confident and incorrect, or partially sure and incorrect) in a previous round.
In addition, certain aspects use a weighting system and a series of triggers to manage a learner's dopamine response and tailor the questioning based on this response. Dopamine is a neurotransmitter responsible for reward-motivated behavior, which within a learning system should be managed to constantly present the learner with the proper balance between risk and reward, as a function of the individual's motivation level. For example, a learner with high motivation would take on harder tasks (more difficult questions, less time offered in review, longer rounds, etc.) if they perceived the reward (points, badges, faster completion time) to be great. A less motivated learner would need smaller rounds, perhaps easier questions, and more intermediate rewards to maintain dopamine levels at which learning happens and memories are more likely to be potentiated.
See, for example, http://www.jneurosci.org/content/32/18/6170.abstract for additional background relating to dopamine response research and results.
With continuing reference to
With continuing reference to
If the module has satisfied the completion criteria as specified above at 764, the module is marked as complete at 765. Otherwise, the system creates a list of all of the remaining questions in the module and marks it as the ELIGIBLE question list at 766. A new container, the SELECTED list, is then initialized at 767. The next target round size 768 is then calculated based on one or more of the following criteria.
Preferably the round size will never exceed the maximum round size set by the algorithm administrator and the round size will never be less than the minimum round size set by the algorithm administrator. If the number of eligible questions exceeds the calculated size of the next round, the capacity of the SELECTED list is set to the target round size. Otherwise it is set as the size of the ELIGIBLE list 769.
The questions in the ELIGIBLE list are then weighted at 777 based on the likelihood that the learner will get each question correct at 778. For example, optimal dopamine release occurs when there is a reasonable balance between success and struggle. The optimal number of questions that a learner should answer correctly in a question round of 8 may be a range of 3 to 6, depending on the learner and their score in a previous round. Therefore the algorithm would want to serve questions that are likely to achieve the correct level of dopamine in the learner in the subsequent round. The questions in the ELIGIBLE list are weighted on one or more of the following criteria:
Learner's Previous Response to This Question (Path To Mastery—P2M—Question State)—If the question is difficult, but the learner has already seen it and answered it to proficiency (Correct 1×) in the most recent round, the question is weighted higher, as there is a higher likelihood that the learner will get the question correct in this round. This is explained in further detail in connection with step 783 below.
For example, a learner who self-identifies as a novice, and/or whose previous round scores indicate that they are relatively unfamiliar with this subject matter, would cause the system to weight certain “easier” questions higher, so that they would be more likely to be shown in this round, up to the appropriate dopamine-driven distribution of likely correct and likely incorrect responses. If the Target Learner Score for that round was 50% for a round of 8 questions, the system may weight extremely easy questions very high (80-99, on a scale of 1-100), moderately easy questions with a weight between 60-79, and extremely difficult questions with a weight between 1-30. These weights are later used at 782 as an assignment to each round. At 779 a loop begins to determine whether the number of questions for the round has been satisfied. At 782 a calculation is implemented. The P2M weight, explained in connection with
For example, in this step, for a target round score of 60% correct, the learner may see a distribution similar to the following:
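While the particular distribution depends on the weights in play, a simplified sketch of the weighted selection underlying steps 777-782 follows; the sampling scheme is an assumption built on the 1-100 weight scale described above:

```python
import random

def select_round_questions(eligible, weight, round_size, rng=random):
    """Weighted sampling without replacement from the ELIGIBLE list.

    eligible: list of question ids; weight: id -> weight on the 1-100
    scale described above (easier questions weighted higher or lower
    depending on the Target Learner Score). A simplified stand-in for
    the flowchart steps 777-782, not the disclosed algorithm itself.
    """
    selected, pool = [], list(eligible)
    while pool and len(selected) < round_size:
        total = sum(weight[q] for q in pool)
        r = rng.uniform(0, total)
        for q in pool:
            r -= weight[q]
            if r <= 0:
                selected.append(q)
                pool.remove(q)
                break
    return selected
```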
Aspects relating to the implementation of the knowledge assessment and testing system invoke various novel algorithms to evaluate and score a particular testing environment.
With reference first to
With reference to
With reference to
With reference to
In the example of
The Path to Mastery for a learner can be considered and described as ranging from Exposure (the learner has seen the question), through Familiarity (the learner is shown the correct answer), to Proficiency (the learner was Correct 1×), and eventually to Mastery (the learner was Correct 2× in a row). Early stages between Exposure and Familiarity may include correcting certain misinformation and doubt, which for some complex questions may require additional correction between Proficiency and Mastery.
In order to potentiate any new learning, including the correction of misinformation or doubt, the timing between these stages needs to be adjusted to take advantage of certain temporal effects. For example, a learner who is confidently wrong about something would be more easily corrected sooner rather than later via the hypercorrection effect, and should therefore be required to answer that question shortly after exposure to the correct answer (to proficiency), but then delayed slightly before the next response to ensure that the misinformation was indeed corrected, and the learner didn't just respond to get through the immediate question round.
The P2M Weighting also has a parameter defined by the algorithm administrator to control the degree of interleaving (see below). This parameter will weight questions that are more closely related topically (e.g., many related classifications or tags), and can skew how soon these related questions will be seen relative to each other in the round selection algorithm.
If a learner is doubtful (they selected more than one possible answer, or are less than 100% confident in a single answer, yet are still wrong), they are uninformed, and the hypercorrection effect would not be responsible for clarifying their understanding. Instead they need to see the correct information in the appropriate context, relatively soon after answering the question, but it can be interleaved with other information they are learning.
If a learner is not sure (they do not know the answer to the question), the likelihood that they see the same question again is weighted according to the rest of the algorithm; there is no imperative to show that same question sooner or later than other questions similarly answered.
If the learner was correct, but still had doubt, they can be shown that question later, as seeing other questions where there is misinformation is more important, and may help reaffirm their existing understandings as they see the correct and incorrect answers to related questions.
If the learner was confident and correct, the learner is assumed to be proficient at this question, and showing the same question much later would be desirable, as it decreases the likelihood that they would remember something they simply guessed, and indeed would demonstrate that they understood the information.
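The preceding cases can be summarized as an urgency table; the numeric values below are illustrative assumptions only, chosen to preserve the ordering described above:

```python
# Illustrative re-display urgency per last response, following the
# ordering described above (numeric values are assumptions only).
P2M_URGENCY = {
    "confident_incorrect": 90,  # hypercorrection: re-ask soon after review
    "doubt_incorrect":     70,  # show correct context soon; may interleave
    "not_sure":            50,  # no special imperative either way
    "doubt_correct":       30,  # can wait; related questions reaffirm it
    "confident_correct":   10,  # proficient: delay to rule out lucky guesses
}

def p2m_weight(last_response: str) -> int:
    """Urgency component of the question weight (higher = sooner)."""
    return P2M_URGENCY[last_response]
```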
The state table in
For example, Question A may ask the learner if they know which U.S. territory was the last one to become a state, and Question B may ask how many U.S. states there are. If the learner knows that the answer to Question A is “Hawaii”, an analysis of the question answer history across all learners may show that learners who knew the answer to Question A got the correct answer for Question B 99% of the time. This would indicate a Question Correlation Index of 0.99 between Question A and B (in one direction only).
If a learner is demonstrating a high degree of success in answering questions (consistently exceeding the target score) and has reached proficiency (1× correct) on a particular question (Question C), then upon reaching mastery (2× correct) on a related question (Question D) whose Question Correlation Index from D to C is high, Question C would automatically be marked as MASTERED (or SATISFIED), as the likelihood that the learner had in fact mastered the information in Question C would be statistically significant.
In some instances, if the Question Correlation Index is high (i.e., the likelihood that the learner will know the answer to a particular question given that they knew the answer to a related question), and the instructor and/or content curator has allowed it, some questions may be dropped from the required questions after only one instance of confident and correct.
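A minimal sketch of how a directional Question Correlation Index might be estimated from answer history follows; the (learner, question, correct) data layout is an assumption for illustration:

```python
from collections import defaultdict

def question_correlation_index(history, q_a: str, q_b: str) -> float:
    """Directional QCI: P(correct on q_b | correct on q_a), estimated
    from answer history across all learners.

    history: iterable of (learner_id, question_id, correct) tuples;
    the data layout is an assumption for illustration.
    """
    correct = defaultdict(set)           # question_id -> learners correct
    for learner, question, is_correct in history:
        if is_correct:
            correct[question].add(learner)
    knew_a = correct[q_a]
    if not knew_a:
        return 0.0
    return len(knew_a & correct[q_b]) / len(knew_a)
```

In the Hawaii example above, 99% of the learners who answered Question A correctly also answered Question B correctly, so this estimator would return 0.99 for that direction.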
In each of the embodiments discussed above, an algorithm is implemented that performs the following general steps:
Identification of a goal state configuration: The author of a given knowledge assessment may define various goal states within the system in order to arrive at a customized knowledge profile and to determine whether a particular ampObject (e.g., question) is deemed as being complete. The following are additional examples of these goal states as embodied by the algorithmic flow charts described above and in conjunction with
Categorizing learner progress: Certain aspects of the system are adapted to categorize the learner's progress against each question (ampObject) in each round of learning, relative to the goal state (described above), using similar categorization structures as described herein, e.g., “confident+correct”, “confident+incorrect”, “doubt+correct”, “doubt+incorrect” and “not sure.”
Subsequent Display of ampObjects: The display of an ampObject in a future round of learning is dependent on the categorization of the last response to the question in that ampObject relative to the goal state. For example, a “confident+incorrect” response has the highest likelihood that it will be displayed in the next round of learning.
The algorithm or scoring engine creates a comparison of the learner's responses to the correct answers. In some embodiments of the invention, a scoring protocol is adopted by which the learner's responses or answers are compiled using a predefined weighted scoring scheme. This weighted scoring protocol assigns predefined point scores to the learner for correct responses that are associated with an indication of a high confidence level by the learner. Such point scores are referred to herein as true knowledge points, which reflect the extent of the learner's true knowledge in the subject matter of the test query. Conversely, the scoring protocol assigns negative point scores or penalties to the learner for incorrect responses that are associated with an indication of a high confidence level. The negative point score or penalty has a predetermined value that is significantly greater in magnitude than the knowledge points for the same test query. Such penalties are referred to herein as misinformation points, which indicate that the learner is misinformed on the matter. The point scores are used to calculate the learner's raw score, as well as various other performance indices. U.S. Pat. No. 6,921,268, issued on Jul. 26, 2005, provides an in-depth review of these performance indices, and the details contained therein are incorporated by reference into the present application.
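A minimal sketch of such a weighted scoring protocol follows; the point values are illustrative assumptions that preserve the described relationship (the misinformation penalty larger in magnitude than the true knowledge points), not the patented scale:

```python
def weighted_score(confidence: str, correct: bool) -> int:
    """Illustrative weighted scoring: true knowledge points for
    confident correct responses, and a larger-magnitude misinformation
    penalty for confident incorrect ones. Values are assumptions."""
    points = {
        ("sure", True):             3,   # true knowledge points
        ("sure", False):           -6,   # misinformation penalty (larger)
        ("partially_sure", True):   1,
        ("partially_sure", False): -2,
        ("not_sure", True):         0,   # admitted ignorance not penalized
        ("not_sure", False):        0,
    }
    return points[(confidence, correct)]
```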
Documenting the Knowledge Profile:
The primary goal of the knowledge profile is to provide the learner with continuous feedback regarding his/her progress in each module. Embodiments of the system use various manifestations of the knowledge profile. However, the following timing is generally used to display the knowledge profile to the learner:
Learning Modules:
Assessment Modules:
One embodiment also provides in the upper right corner of the Learning application (in the form of a small pie chart) a summary of the learner's progress for that module (
Another embodiment provides a graph including 4 primary quadrants and one secondary quadrant indicating a path to mastery from “I don't know” to “Uninformed” to “Mastery”, either directly or through Doubt or Misinformed. See for example
One embodiment also displays to the learner, after each response to an assessment (in both learning and assessment modules), whether his/her answer is confident+correct, partially sure+correct, unsure, confident+incorrect, or partially sure+incorrect. However, the correct answer is not provided at that time. Rather, the goal is to heighten the anticipation of the learner in any particular response so that he/she will be eager to view the correct answer and explanation in the learning phase of any given round.
In most embodiments, the documented knowledge profile is based on one or more of the following pieces of information: 1) The configured goal state of the module (e.g. mastery versus proficiency) as set by the author or registrar; 2) the results of the learner's formative assessment in each round of learning, or within a given assessment; and 3) how the learner's responses are scored by the particular algorithm being implemented. As needed or desired, the knowledge profile may be made available to the learner and other users. Again, this function is something that may be selectively implemented by the author or other administrator of the system.
Other embodiments have displayed a simple list of response percentages separated by categories of responses, or the cumulative scores across all responses based on the scores assigned to each response.
In one embodiment, during the assessment phase of each round of learning the following data is continuously displayed and updated as the learner responds to each question: (a) the number of questions in that Question Set (which is determined by the author or registrar); (b) which question from that question set is currently being displayed to the learner (1 of 6; 2 of 6; etc.); (c) which question set is currently being displayed to the learner (e.g., “Question Set 3”); (d) the total number of questions (ampObjects) in the module; and (e) the number of ampObjects that have been completed (1× Correct scoring) or mastered (2× Correct scoring).
The number of question sets in a module is dependent on: (a) the number of ampObjects in a module; (b) the number of ampObjects displayed per question set; (c) the scoring (1× Correct or 2× Correct); (d) the percentage required for ‘passing’ a particular module (default is 100%); and (e) the number of times a learner must respond to an ampObject before he/she completes (1× Correct) or masters (2× Correct) each ampObject.
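Under the simplifying assumption that every response succeeds, these dependencies yield a rough lower bound on the number of question sets, sketched below for illustration:

```python
import math

def estimated_question_sets(num_objects: int, per_set: int,
                            correct_required: int = 2,
                            passing_pct: float = 100.0) -> int:
    """Rough lower bound on question sets needed to finish a module.

    Simplifying assumption for illustration: every response succeeds,
    so each object needs exactly `correct_required` (1x or 2x Correct)
    appearances, and only `passing_pct` of objects must be completed.
    """
    objects_needed = math.ceil(num_objects * passing_pct / 100.0)
    total_responses = objects_needed * correct_required
    return math.ceil(total_responses / per_set)

# 30 ampObjects, 6 per set, 2x Correct scoring: at least 10 sets.
print(estimated_question_sets(30, 6))   # 10
```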
In one embodiment, during the learning phase of each question set, the following may be continuously displayed as the learner reviews the questions, answers, explanations and Additional Learning elements for each ampObject: (a) The total number of questions (ampObjects) in the module; (b) the number of questions completed (1× Correct) or mastered (2× Correct); (c) a progress summary graph, such as a pie chart showing the number of confident and correct responses at that point in time; and (d) a detailed progress window providing real-time information regarding how the responses have been categorized.
In another embodiment of the system, during the learning or assessment phase, the following may be continuously displayed as the learner reviews questions: (a) time spent in the question and module; (b) time remaining to complete the question or module, or suggested time to complete the question or the module if the learner has opted into this game mechanic or practice test option; and (c) the points or score the user has amassed in this iteration of learning.
In the current embodiment of the system, in an assessment module (i.e., where only the assessment, and no learning, is displayed to the learner), learner progress is displayed to the learner as follows: (a) the total number of questions in that module; and (b) which question from that module is currently being displayed to the learner (1 of 25; 2 of 25; etc.). In assessment modules all questions in that module are presented to the learner in one round of assessment. There is no parsing of ampObjects into question sets, as question sets are not pertinent to assessments.
Upon completion of the assessment module, the learner is provided with a page summarizing one or more of the following:
System Roles:
In further embodiments, in addition to the system roles stated above (Administrator, Author, Registrar, Analyst, and Learner), there are additional roles that attend to detailed tasks or functions within the five overall roles. These additional roles include:
In other embodiments, the system roles may be grouped by the overall system component, such as within the Content Management System (CMS) or Registration and Data Analytics (RDA).
In one embodiment one or more of the following steps are utilized in the execution of a learning module. One or more of the steps set forth below may be effected in any order:
Similar functional steps are used in the execution of an assessment module. However, for assessment modules, no learning phase is present, and ampObjects (only the introduction, question, answers) are presented in one contiguous grouping to the learner (not in question sets).
Authoring of learning objects (ampObjects) may include pre-planning and the addition of categorical data to each learning object (e.g., learning outcome statement; topic; sub-topic; etc.). In addition, ampObjects may be aggregated into modules, and modules organized into higher order containers (e.g., courses, programs, lessons, curricula). The CMS may also be adapted to conduct quality assurance review of a curriculum, and publish a curriculum for learning or assessment.
Within the Registration and Data Analytics (RDA) application
The ability to enroll a learner in a curriculum, and allow the learner to engage in an assessment and/or learning as found in the curriculum. In addition to the feedback provided directly to the learner in the Learning application (as described above), reports associated with learning and/or assessment may also be accessed in the RDA by particular roles (e.g., analyst, instructor, administrator).
In accordance with another aspect, reports can be generated from the knowledge profile data for display in varied modalities to learners or instructors. Specifically, in the RDA reports can be accomplished through a simple user interface within a graphical reporting and analysis tool that, for example, allows a user to drill down into selected information within a particular element in the report. Specialty reporting dashboards may be provided such as those adapted specifically for instructors or analysts. Reports can be made available in formats such as .pdf, .csv, or many other broadly recognized data file formats.
As described above, the system described herein may be implemented in a variety of stand-alone or networked architectures, including the use of various database and user interface structures. The computer structures described herein may be utilized for both the development and delivery of assessments and learning materials, and may function in a variety of modalities including a stand-alone system or network distributed, such as via the World Wide Web (Internet), intranets, mobile networks, or other network distributed architectures. In addition, other embodiments include the use of multiple computing platforms and computer devices, or delivered as a stand-alone application on a computing device with, or without, interaction with the client-server components of the system.
In one specific user interface embodiment, answers are selected by dragging the answer to the appropriate response area. These may be comprised of a “confident” response area, indicating that the learner is very confident in his/her answer selection; a “doubtful” response area, indicating that the learner is only partially certain of his/her answer selection; and a “not sure” response area, indicating that the learner is not willing to commit that he/she knows the correct answer with any level of certainty. Various terms may also be used to indicate the degree of confidence, and the examples of “confident”, “doubtful”, and “not sure” indicated above are only representative. For example, “I am sure” for highly confident, “I am partially sure” for a doubtful state, and “I don't know yet” for a not sure state. In one embodiment representing an assessment program, only a single “I Am Partially Sure” response box may be provided; i.e., the learner can select only one answer within a “partially sure” response.
In accordance with another aspect, the author of a learning module can configure whether the ampObjects are chunked or otherwise grouped so that only a portion of the total ampObjects in a given module is presented in any given round of learning; all such “chunking” or grouping is determined by the author through a module configuration step. The author can chunk learning objects at two different levels in a module: by the number of learning objects (ampObjects) included in each module, and by the number of learning objects displayed per question set within a learning event. In this embodiment, completed ampObjects are removed based on the assigned definition of “completed”; for example, “completed” may mean once (1×) correct or twice (2×) correct, depending on the goal settings assigned by the author or administrator. Real-time analytics can also be used to optimize the number of learning objects displayed per question set of learning.
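The sketch below illustrates, under stated assumptions, how a chunked question set might be assembled from ampObjects not yet “completed”; the function and field names are hypothetical.

```python
# Illustrative sketch (names are hypothetical) of author-configured
# chunking: only ampObjects that have not yet met the module's
# "completed" goal (e.g., 1x or 2x correct) are drawn into the next
# question set of the configured size.

def next_question_set(ampobjects, correct_counts, goal=2, set_size=5):
    """Return up to `set_size` ampObject ids still short of the goal."""
    remaining = [ao for ao in ampobjects if correct_counts.get(ao, 0) < goal]
    return remaining[:set_size]

counts = {"AO-1": 2, "AO-2": 1, "AO-3": 0}
print(next_question_set(["AO-1", "AO-2", "AO-3"], counts, goal=2, set_size=2))
# -> ['AO-2', 'AO-3']  (AO-1 has reached 2x correct and is removed)
```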
ampObject Structure
ampObjects as described herein are designed as “reusable learning objects” that manifest one or more of the following overall characteristics: a learning outcome statement (or competency statement or learning objective); the learning required to achieve that competency; and an assessment to validate achievement of that competency. As described previously for learning objects, the basic components of an ampObject include: an introduction; a question; the answers (1 correct answer and 2-4 incorrect answers); an explanation (the need-to-know information); optional “Additional Learning” information (the nice-to-know information); metadata (such as the learning outcome statement, topic, sub-topic, key words, and other hierarchical or non-hierarchical information associated with each ampObject); and author notes. Through the reporting capabilities of the system, the author can link a particular metadata element to the assessment and learning attributable to each ampObject, which has significant benefits for downstream analysis. Using a Content Management System (“CMS”), these learning objects (ampObjects) can be rapidly re-used, in current or revised form, in the development of learning modules and curricula.
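A minimal data-structure sketch of these components follows; the field names are illustrative rather than the system's actual schema.

```python
# Hedged sketch of the ampObject components named above; field names
# are hypothetical stand-ins, not the system's actual schema.
from dataclasses import dataclass, field

@dataclass
class AmpObject:
    introduction: str
    question: str
    correct_answer: str
    incorrect_answers: list        # 2-4 distractors
    explanation: str               # the "need to know" information
    additional_learning: str = ""  # optional "nice to know" information
    metadata: dict = field(default_factory=dict)  # outcome, topic, keywords
    author_notes: str = ""

ao = AmpObject(
    introduction="Intro text",
    question="Which response format records confidence?",
    correct_answer="Two-dimensional (answer plus certainty)",
    incorrect_answers=["Right/wrong only", "Essay"],
    explanation="A single response yields both correctness and confidence.",
    metadata={"topic": "CBA", "learning_outcome": "LO-1"},
)
```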
In another embodiment, shadow questions may be utilized that are associated with the same competency (learning outcome; learning objective). In one embodiment, the author associates relevant learning objects into a shadow question grouping. If a learner receives a correct score for one question that is part of a shadow question group, then every learning object in that shadow question group is deemed to have been answered correctly. The system pulls randomly (without replacement) from all the learning objects in a shadow group, as directed by one or more of the algorithms described herein. For example, in a module set up with the 1× Correct algorithm, the following procedure may be implemented:
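A hedged sketch of one such procedure follows: the system draws randomly without replacement from the group, and a single correct response masters the whole group; all names are illustrative.

```python
import random

# Illustrative 1x-correct shadow-question procedure (names are
# hypothetical): draw randomly without replacement from the group's
# learning objects; one correct response masters the entire group.

def run_shadow_group(group, answer_fn):
    """`group` is a list of learning-object ids; `answer_fn(lo)` returns
    True when the learner answers that learning object correctly."""
    pool = list(group)
    random.shuffle(pool)           # random order, without replacement
    for lo in pool:
        if answer_fn(lo):
            return True            # group mastered; stop displaying it
    return False                   # pool exhausted without a correct answer

# Example: the group is mastered as soon as LO-2 is answered correctly.
print(run_shadow_group(["LO-1", "LO-2", "LO-3"], lambda lo: lo == "LO-2"))
```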
In the above scenario, that shadow question group is considered mastered, and no additional learning objects from that shadow question group will be displayed to the learner.
The system can create a map of highly correlated questions, whereby learner answer history is used to estimate the likelihood that a learner who knows the answer to Question #1 (Q1) also knows the answer to Question #2 (Q2). Authors, content curators and instructors can use this Question Correlation Index (QCI) to review the related questions, determine whether their QCI is valid, and use it to remove questions from the question set in an adaptive learning configuration.
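One way such an index could be estimated, shown purely as a hedged sketch, is the conditional probability that a learner who answered Q1 correctly also answered Q2 correctly; the estimator below is an assumption, not the system's actual QCI formula.

```python
# Illustrative QCI estimator: P(correct on Q2 | correct on Q1) from
# historical answer records. This particular formula is an assumption.

def qci(history, q1, q2):
    """`history` maps learner id -> {question id: bool (correct?)}."""
    both = sum(1 for h in history.values() if h.get(q1) and h.get(q2))
    q1_correct = sum(1 for h in history.values() if h.get(q1))
    return both / q1_correct if q1_correct else 0.0

history = {
    "lrn1": {"Q1": True, "Q2": True},
    "lrn2": {"Q1": True, "Q2": True},
    "lrn3": {"Q1": True, "Q2": False},
    "lrn4": {"Q1": False, "Q2": True},
}
print(qci(history, "Q1", "Q2"))   # -> 0.666... (2 of 3 Q1-correct learners)
```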
Modules serve as the “container” for the ampObjects as delivered to the user or learner, and are therefore the smallest available organized unit of curriculum that a learner will be presented with or otherwise experience in the form of an assignment. As noted above, each module preferably contains one or more ampObjects. In one embodiment it is the module that is configured according to the algorithm; a module can be configured, for example, as sketched below.
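By way of illustration only, a minimal configuration sketch follows; every field name and default shown is a hypothetical stand-in for the author-configurable options described herein.

```python
# Hedged module-configuration sketch; field names and defaults are
# assumptions, not the system's actual configuration schema.
from dataclasses import dataclass

@dataclass
class ModuleConfig:
    algorithm: str = "2x_correct"     # or "1x_correct" (proficiency)
    ampobjects_per_set: int = 5       # chunk size per round of learning
    mode: str = "learning"            # or "assessment" (summative)
    completed_definition: int = 2     # correct answers required per ampObject

config = ModuleConfig(algorithm="1x_correct", ampobjects_per_set=7)
print(config)
```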
Dynamic Modules are containers for a larger set of ampObjects that can be created on-demand by instructors and learners, and are not rigidly defined by the original content author. Dynamic Modules may be created based on keywords, the intended duration of an assignment, or other metadata associated with individual ampObjects.
While the curriculum structure may be open-ended in certain embodiments, the author or administrator has the ability to control the structure regarding how the curriculum is delivered to the learner. For example, the modules and other organizational units (e.g., program, course or lesson) may be renamed or otherwise modified and restructured. In addition, a module can be configured such that it is displayed to the learner as a stand-alone assessment (summative assessment), or as a learning module that incorporates both the formative assessment and learning capabilities of the system.
As a component of the systems described herein, a learner dashboard is provided that displays and organizes various aspects of information for the user to access and review. For example, a user dashboard may include one or more of the following:
This includes, in one embodiment, a list of current assignments with one or more of the following status states (documenting the completion state of that module for the student or reviewer): Start Assignment, Continue Assignment, Review, Start Refresher, Continue Refresher, Review Content (reviewer only). Also included in the My Assignments page is curriculum information, such as general background information about aspects of the current program (e.g., a summary or overview of a particular module) and the hierarchy or organization of the curriculum. The assignments page may also include pre- and post-requisite lists, such as other modules or curricula that may need to be taken before a learner is allowed to access a particular assignment or training program. Upon completion (mastery) of a module, a Refresher Module and a Review Module are presented to the learner. The Refresher Module allows the learner to re-take the module using a modified 1× correct algorithm. The Review Module displays the progress of a particular learner through a given assessment or learning module (a historical perspective for assessments or learning modules taken previously), with the ampObjects in that module sorted by how much difficulty the learner experienced with each (those presenting the greatest difficulty listed first). The Review Content link is presented only to individuals in the Reviewer role. The Assignments page also shows additional details about the module, including time to complete, as well as the optimal time to refresh each module in order to leverage the point of maximal synaptic potentiation along a calculated forgetting curve.
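As a hedged illustration of the refresh timing just mentioned, the sketch below computes an “optimal time to refresh” from a simple exponential forgetting curve; the model, the stability parameter, and the retention threshold are all assumptions rather than the system's actual algorithm.

```python
import math

# Illustrative refresh-timing sketch using a simple exponential
# forgetting curve R(t) = exp(-t / S). The model and parameter values
# are assumptions, not the system's actual calculation.

def optimal_refresh_days(stability_days, retention_threshold=0.8):
    """Days until predicted retention decays to the threshold:
    solve exp(-t / S) = threshold  =>  t = -S * ln(threshold)."""
    return -stability_days * math.log(retention_threshold)

print(round(optimal_refresh_days(stability_days=10), 1))  # -> 2.2 days
```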
This may include progress dashboards displayed during a learning phase (including both tabular and graphical data; see
This may include a progress dashboard displayed after assessment (both tabular and graphical data; see
A reporting role (Analyst) is supported in various embodiments. In certain embodiments, the reporting function may have its own user interface or dashboard to create a variety of reports based on templates available within the system, such as through the Registration and Data Analytics (RDA) application. Standard and/or customized report templates may be created by an administrator and made available to any particular learning environment. Reports so configured can capture the amount of time required by the learner to answer each ampObject and to answer all ampObjects in a given module; the time spent reviewing the answers is also captured. See e.g.
Automation of Content Upload: In accordance with other aspects, the systems described herein may be adapted to utilize various automated methods of adding ampObjects to the system. Code may be implemented within the learning system to read, parse and write the data into the appropriate databases. The learning system may also enable the use of scripts to automate upload from previously formatted data, e.g., from CSV or XML files, into the learning system. In addition, in some embodiments a custom-built rich-text-format template can be used to capture and upload the learning material directly into the system while retaining formatting and structure.
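As a hedged illustration of such a script, the sketch below parses a previously formatted CSV file into ampObject records; the column names and the commented persistence call are assumptions, not the system's actual schema or API.

```python
import csv

# Illustrative upload script (column names are assumptions) that parses
# a previously formatted CSV file into ampObject records ready for
# insertion into the content database.

def load_ampobjects(path):
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            yield {
                "introduction": row["introduction"],
                "question": row["question"],
                "correct_answer": row["correct_answer"],
                "incorrect_answers": row["incorrect_answers"].split("|"),
                "explanation": row["explanation"],
            }

# for record in load_ampobjects("ampobjects.csv"):
#     insert_into_database(record)   # hypothetical persistence call
```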
In some embodiments, the learning system supports various standard types of user interactions used in most computer applications, for example, context-dependent menus appear on a right mouse click, etc. Some embodiments of the system also include several additional features such as drag and drop capabilities and search and replace capabilities.
Data Security: Aspects of the present invention and various embodiments use standard information technology security practices to safeguard the protection of proprietary, personal and/or other types of sensitive information. These practices include (in part) application security, server security, data center security, and data segregation. For example, for application security, each user is required to create and manage a password to access his/her account; the application is secured using https; all administrator passwords are changed on a repeatable basis; and the passwords must meet strong password minimum requirements. For example, for server security, all administrator passwords are changed on a pre-defined basis with a new random password that meets strong password minimum requirements, and administrator passwords are managed using an encrypted password file. For data segregation, the present invention and its various embodiments use a multi-tenant shared schema where data is logically separated using domain ID, individual login accounts belong to one and only one domain (including administrators), all external access to the database is through the application, and application queries are rigorously tested. In other embodiments, the application can be segmented such that data for selected user groups are managed on separate databases (rather than a shared tenant model).
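As a hedged illustration of the shared-schema segregation described above, the sketch below scopes every query by the caller's domain ID so that one tenant can never read another tenant's rows; the table and column names are hypothetical.

```python
import sqlite3

# Minimal multi-tenant sketch: all access is through the application,
# and every query is filtered by the domain ID bound to the login.
# Table and column names are hypothetical.

def fetch_learner_results(conn, domain_id, learner_id):
    return conn.execute(
        "SELECT * FROM results WHERE domain_id = ? AND learner_id = ?",
        (domain_id, learner_id),  # domain_id comes from the login session
    ).fetchall()
```

The design point is that the tenant filter is applied by the application on every query, never supplied by the end user.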
A learning system constructed in accordance with aspects of the present invention uses various “Switches” in its implementation in order to allow the author or other administrative roles to ‘dial up’ or ‘dial down’ the mastery that learners must demonstrate to complete the modules. A “Switch” is defined as a particular function or process that enhances (or degrades) learning and/or memory. The functionality associated with these switches is based on relevant research in experimental psychology, neurobiology, and gaming. Examples of some of the various switches (a partial list) incorporated into the learning system described herein are expanded upon below. The implementation of each switch will vary depending on the particular embodiment and deployment configuration of the present invention.
Repetition (Adaptive Repetition): An algorithmically driven repetition switch is used to enable iterative rounds of questioning of a learner in order to achieve mastery. In the classical sense, repetition enhances memory through the purposeful and highly configurable delivery of learning through iterative rounds. The Adaptive Repetition switch uses formative assessment techniques, and is in some embodiments combined with the use of questions that do not have forced-choice answers. Repetition in the present invention and various embodiments can be controlled by enforcing, or not enforcing, repetition of assessment and learning materials to the end-user, the frequency of that repetition, and the degree of chunking of content within each repetition. In other embodiments, “shadow questions” are utilized, in which the system requires that the learner demonstrate a deeper understanding of the knowledge associated with each question group. Because the ampObjects in a shadow question group are all associated with the same competency, display of the various shadow questions enables a more subtle yet deeper form of Adaptive Repetition.
Priming: Pre-testing aspects are utilized as a foundational testing method in the system. Priming through pre-testing initiates the development of memory traces that are then reinforced through repetitive learning. Learning using aspects of the present invention opens up a memory trace associated with a related topic, then reinforces that pathway and creates additional pathways for the mind to capture specific knowledge. The priming switch can be controlled in a number of ways in the present invention and its various embodiments, such as through the use of a formal pre-assessment, as well as in the standard use of formative assessment during learning.
Progress: A progress switch informs the learner as to his/her progress through a particular module, and is presented to the user in the form of a graphic through all stages of learning.
Feedback: A feedback switch includes both immediate feedback upon the submission of an answer as well as detailed feedback in the learning portion of the round. Immediate reflection to the learner as to whether he/she got a question right or wrong has a significant impact on attention of the learner and performance as demonstrated on post-learning assessments. The feedback switch in the present invention and various embodiments can be controlled in a number of ways, such as through the extent of feedback provided in each ampObject (e.g., providing explanations for both the correct and incorrect answers, versus only for the correct answers), or through the use of both summative assessments combined with standard learning (where the standard learning method incorporates formative assessment). In addition, in learning modules the learner is immediately informed as to the category of his/her response (e.g., confident and correct; partially sure and incorrect; etc.).
Context: A context switch allows the author or other administrative roles to simulate the proper or desired context, such as simulating the conditions required for the application of particular knowledge. For example, in a module with 2× correct scoring, the author can configure the module to remove images or other information that is not critical to the particular question once the learner has provided a Confident+Correct response. The image or other media may be placed in either the introduction or in the question itself, and may be deployed selectively during the learning phase or routinely as part of a refresher. The context switch in the present invention or various embodiments enables the author or administrator to make the learning and study environment reflect as closely as possible the actual testing or application environment. In practice, if the learner will need to recall the information without the help of a visual aid, the learning system can be adapted to present the questions to the learner without the visual aids at later stages of the learning process; if some core knowledge is required to begin the mastery process, the images might be used at an early stage. The principle is to wean the learner off of the images or other supporting but non-critical assessment and/or learning materials over some period of time. In a separate yet related configuration of the context switch, the author can determine what percentage of scenario-based learning is required in a particular ampObject or module. The context switch can also be used to change the background image periodically, thus reducing any dependency on a specific look-and-feel in the application and eliminating further dependencies on visual aids within the application. The same technique may be used to change the layout of the answer key relative to the questions being asked within the learning environment.
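A hedged sketch of the media-weaning behavior described above follows; the counter and threshold names are hypothetical, and the rule shown (withhold media after one Confident+Correct response) is one possible configuration rather than the system's fixed behavior.

```python
# Illustrative context-switch rule for a 2x-correct module: supporting
# media is withheld once the learner has already given a
# Confident + Correct response for that ampObject. Names are hypothetical.

def show_media(confident_correct_count, remove_after=1):
    """Return False once the learner has been Confident + Correct
    `remove_after` times, weaning him/her off the visual aid."""
    return confident_correct_count < remove_after

print(show_media(0))  # -> True  (early stage: image displayed)
print(show_media(1))  # -> False (image removed for the second correct pass)
```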
Elaboration: This switch has various configuration options. For example, the elaboration switch allows the author to provide simultaneous assessment of both knowledge and certainty in a single response across multiple venues and formats. Elaboration may consist of an initial question, a foundational question, a scenario-based question, or a simulation-based question. This switch requires simultaneous selection of the correct answer (recognition answer type) and the degree of confidence; in addition, the learner must contrast and compare the various answers before providing a response. It also provides a review of the explanation of both correct and incorrect answers, which may be provided as a text-based answer, a media-enhanced answer or a simulation-enhanced answer. Elaboration provides additional knowledge that supports the core knowledge, and also provides simple repetition for the reinforcement of learning. This switch can be configured for once (1×) correct (Proficiency) or twice (2×) correct (Mastery) levels of learning. In practice, the information currently being tested is associated with other information that the learner might already know or has already been tested on; by thinking about something already known, the learner can elaborate on and amplify the piece of information he or she is trying to learn. In the author role, the use of shadow questions as described above may be implemented in the elaboration switch as a deeper (elaborative) form of learning against a particular competency. The system may also provide enhanced support for differing simulation formats that provide the ability to incorporate testing answer keys into the simulation event. A more “app-like” user interface in the learning modules engages the kinesthetic as well as the cognitive and emotional domains of the learner; the addition of a kinesthetic component (e.g., dragging answers to the desired response box) further enhances long-term retention through higher order elaboration.
Spacing: A spacing switch in accordance with aspects of the present invention and various embodiments utilizes the manual chunking of content into smaller pieces, allowing the biological processes that support long-term memory (e.g., protein synthesis) to take place, as well as enhanced encoding and storage. This synaptic consolidation relies on a certain amount of rest between testing sessions and allows the consolidation of memory to occur. The spacing switch can be configured in multiple ways in the various embodiments of the invention, such as setting the number of ampObjects per round of learning within a module and/or the number of ampObjects per module. The spacing switch can also be utilized in a “Sleep Advisor” capacity: after too many hours spent learning (which inhibits synaptic consolidation), the learner is advised to take a break and go to sleep, having reached an inflection point where the best thing he or she can do for learning is to sleep rather than continue studying.
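The following is a minimal sketch of the “Sleep Advisor” behavior, assuming a configurable hours-per-day threshold; the threshold value and function name are hypothetical.

```python
# Illustrative "Sleep Advisor" sketch: after a configurable number of
# hours of learning in a day, the learner is advised to stop and sleep.
# The threshold and message wording are assumptions.

def sleep_advice(hours_learning_today, threshold_hours=4.0):
    if hours_learning_today >= threshold_hours:
        return ("You have reached the point where sleep will help "
                "your learning more than further study.")
    return None  # no intervention needed yet

print(sleep_advice(5.0))
```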
Certainty: A certainty switch allows the simultaneous assessment of both knowledge and certainty in a single response. This type of assessment is important to a proper evaluation of a learner's knowledge profile and overall stage of learning. Simultaneous evaluation of both knowledge (cognitive domain) and certainty (emotional domain) enhances long-term retention through the creation of memory associations in the brain. The certainty switch in accordance with aspects of the present invention and various embodiments can be formatted with a configuration of once (1×) correct (proficient) or twice (2×) correct (mastery).
Attention: An attention switch in accordance with aspects of the present invention and various embodiments requires that the learner provide a judgment of certainty in his/her knowledge (i.e. both emotional and relational judgments are required of the learner). As a result, the learner's attention is heightened. Chunking can also be used to alter the degree of attention required of the learner. For example, chunking of the ampObjects (the number of ampObjects per module, and the number of ampObjects displayed per round of formative assessment and learning) focuses the learner's attention on the core competencies and associated learning required to achieve mastery in a particular subject. In addition, provision of salient and intriguing feedback at desired stages of learning and/or assessment ensures that the learner is fully engaged in the learning event (versus being distracted by activities not associated with the learning event).
Motivation: A motivation switch in accordance with aspects of the present invention and various embodiments enables a learner interface that provides clear direction as to the learner's progress within one or more of the rounds of learning within any given module, course or curriculum, as a reflection of the current learning state coupled with the learner's initially declared motivating objectives. The switch in the various embodiments can also display either qualitative (categorization) or quantitative (scoring) progress results to each learner.
Risk and Rewards: A risk/reward switch provides rewards according to a mastery-based reward schedule, which triggers dopamine release and stimulates attention and curiosity in the learner. Risk is manifest because learners are penalized when a response is Confident & Incorrect or Partially Sure & Incorrect. The sense of risk can be heightened when a progress graphic is available to the user at all phases of learning, and further enhanced when the learner is allowed to wager a certain amount of points on each correct or partially correct answer. Calculating the amount of points to wager on each question requires a heightened state of attention (and thus receptivity to learning) from the learner.
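A minimal scoring sketch consistent with the risk/reward behavior described above follows; the specific point values and the wager mechanics shown are illustrative assumptions.

```python
# Hedged scoring sketch: Confident & Incorrect and Partially Sure &
# Incorrect responses are penalized, and a wager multiplies the stakes.
# Point values are assumptions, not the system's actual schedule.

SCORE = {
    ("confident", True):  +2,
    ("confident", False): -2,   # penalty: confidently held misinformation
    ("partial",   True):  +1,
    ("partial",   False): -1,   # penalty: doubtful and wrong
    ("not_sure",  None):   0,   # admitting ignorance is never penalized
}

def score_response(confidence, is_correct, wager=1):
    return SCORE[(confidence, is_correct)] * wager

print(score_response("confident", False, wager=3))   # -> -6
```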
Desirable Difficulties refers to a concept whereby introducing certain changes in the learning environment that were originally considered undesirable may actually promote more effective learning. As described by Robert and Elizabeth Bjork, the system supports the following desirable difficulties. See, e.g., http://bjorklab.psych.ucla.edu/pubs/EBjork_RBjork—2011.pdf. Some examples of these desirable difficulties are described below.
Varying the Conditions of Practice.
Learning in the same physical environment causes the brain to associate the learning with the actual environment itself, causing retrieval to be more challenging in a different environment. The system encourages learners to use different variants of the platform, and will periodically suggest to learners to switch between the desktop (web) version and mobile version in order to mitigate any association of the learning with the physical environment. Furthermore, the layout and background images of the entire application window may change in order to further distinguish between the learning environment and the content being learned.
Spacing Practice.
While cramming may be effective for short-term retrieval (and the system allows a short-term memory objective), spacing out the learning with scheduled refreshers is the most effective way to facilitate long-term memory recall. The system supports an “optimal time to refresh” parameter, showing the next time the learner should take a refresher module; this optimizes the amount of time that a learner must spend studying while providing the greatest long-term memory benefits.
Interleaving.
Showing seemingly unrelated material during learning has been shown to be more effective than teaching all related material at once (known as blocking), it is believed largely because learners can focus on the differences between the learning materials instead of the similarities. Through tagged and classified content, the system can generate a dynamic module that includes seemingly unrelated (in actuality, very loosely related) material from a previous learning event in order to best leverage the interleaving effect. The system also supports an algorithmic parameter to promote the interleaving effect, as sketched below.
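The sketch below illustrates one hypothetical selection heuristic for such a dynamic module: drawing previously learned items that share at most one tag with the current topic. The heuristic and all names are assumptions, not the system's actual parameter.

```python
import random

# Illustrative interleaving heuristic (an assumption): pull loosely
# related items, i.e., items sharing at least one but at most
# `max_shared` tags with the current topic.

def interleaved_items(prior_items, current_tags, k=3, max_shared=1):
    loose = [item for item, tags in prior_items.items()
             if 0 < len(tags & current_tags) <= max_shared]
    return random.sample(loose, min(k, len(loose)))

prior = {"AO-7": {"safety", "labels"}, "AO-9": {"dosage"}, "AO-4": {"safety"}}
print(interleaved_items(prior, current_tags={"safety", "storage"}, k=2))
# -> two of the loosely related items (AO-9 shares no tags and is excluded)
```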
Disfluency.
Using text and fonts that are slightly harder to read has been shown to result in deeper cognitive processing and improved memory performance. The system allows for an administrator-defined disfluency parameter, enabling some fonts to be skewed or kerned, or in some cases substituted, for certain answer choices. Because such changes might otherwise be perceived as software bugs, the learner is advised that these textual changes may be enacted in order to facilitate more effective learning. See, for example, http://www.ncbi.nlm.nih.gov/pubmed/21040910.
Aspects of the present invention and various embodiments include a built-in registration capability whereby user accounts can be added or deleted from the system, users can be placed in an ‘active’ or ‘inactive’ state, and users (via user accounts) can be assigned to various assessment and learning programs in the system. In the current embodiment of the invention, registration is managed in the Registration and Data Analytics application. In an earlier embodiment, registration was managed in the three-tier unified application system. Registration can also be managed in external systems (such as a Learning Management System or portal), and that registration information is communicated to the system through technical integration.
Aspects of the present invention and various embodiments have the capability of operating as a stand-alone application, or can be technically integrated with third-party Learning Management Systems (“LMS”). Learners that have various assessment and learning assignments managed in the LMS can launch and participate in assessment and/or learning within the system with or without single sign-on capability. The technical integration is enabled through a variety of industry standard practices such as Aviation Industry CBT Committee (AICC) interoperability standards, Learning Tools Interoperability (LTI) standards, http posts, web services, and other such standard technical integration methodologies.
In various embodiments of the system, an avatar with succinct text messages is displayed to provide guidance to the learner on an as-needed basis. The nature of the message, and when or where the avatar is displayed, is configurable by the administrator of the system. It is recommended that the avatar be used to provide salient guidance to the user. For example, the avatar can be used to provide guidance regarding how the switches (described above) impact the learning from the perspective of the learner. In the present invention, the avatar is displayed only to the learner, not to the author or other administrative roles in the system. The avatar can also be used to intervene if a learner is following a learning path that shows a significant level of disengagement from the system.
Structure of ampObject Libraries and Assignments
Also included is a module library 1807 that contains the configuration options for the operative algorithms, as well as information relating to Bloom's level, application, behaviors, and additional competencies. An administrator or author may utilize these structures in the following manner. First, an ampObject is created at 1802, key elements for the ampObject are built at 1803, and the content and media are assembled into an ampObject at 1804. Once the ampObject library 1801 is created, the module 1807 is created by determining the appropriate ampObjects to include in the module. After the module is created, the learning assignment is published. Alternatively, the ampObjects within a Curriculum are made available and a dynamic module is created by the instructor or the learner. See
Referring back for example to
Content Management System Roles:
CMS enables certain roles within the system, including content author, content manager, resource librarian, publisher, translator, reviewer and CMS administrator. The content author role provides the ability to create learning objects and maintain them over time. The resource librarian role provides the ability to manage a library of resources that can be used to create content for the learner. The translator role provides the ability to translate content into another language and otherwise adjust the system for the locale where the system is being administered. The content manager role provides the ability to manage a staff of authors, resource librarians and translators. The publisher role provides the ability to manage the organizational structure of the curriculum, and to decide when to publish works and when to prepare new versions of existing works. The reviewer role provides the ability to provide feedback on content prior to publication. The CMS administrator role provides the ability to configure the knowledge assessment system for use within any particular organization.
Content Author's Goals: The content author is adapted to provide several functions including one or more of the following:
Content Resource Librarian's Goals: The content resource librarian is adapted to provide several functions including one or more of the following:
Content Translator's Goals: The content translator is adapted to provide several functions including one or more of the following:
As used above, “Translation” is the expression of existing content in another language. “Localization” is fine-tuning of a translation for a specific geographic (or ethnic) area. By way of example, English is a language; US and UK are locales, where there are some differences in English usage in these two locales (spelling, word choice, etc.).
Content Manager's Goals: The content manager is adapted to provide several functions including one or more of the following:
Content Publisher's Goals: The content publisher is adapted to provide several functions including one or more of the following:
Content Reviewer's Goals: The content reviewer is adapted to provide several functions including one or more of the following:
CMS Administrator Goals: The CMS administrator is adapted to provide several functions including one or more of the following:
Learning System Roles: The learning system or application 950 generally provides a particular learner the ability to complete assignments and master content.
Learner's Goals: The learner is adapted to provide several functions including one or more of the following:
Registration and Data Analytics (RDA) Roles:
RDA 308 enables certain roles within the system, including that of a registrar, an instructor, an analyst and an RDA administrator. The role of the registrar is to administer learner accounts and learner assignments in the system. The goal of the instructor is to view information regarding all students, a subset of students or a student's results. The goal of the analyst is to understand learner performance and activity for a particular organization or individual. The goal of the RDA administrator is to configure the RDA for use within any particular organization.
Registrar's Goals: The registrar is adapted to provide several functions including one or more of the following:
Instructor's Goals: The instructor is adapted to provide several functions including one or more of the following:
Analyst's Goals: The analyst is adapted to provide several functions including one or more of the following:
RDA Administrator's Goals—The RDA administrator is adapted to provide several functions including one or more of the following:
Additional System Goals and Roles: The knowledge management system may also include one or more of the following functions and capabilities:
Memory 1910 may include various components (e.g., machine readable media) including, but not limited to, a random access memory component (e.g., a static RAM “SRAM”, a dynamic RAM “DRAM”, etc.), a read only component, and any combinations thereof. In one example, a basic input/output system 1920 (BIOS), including basic routines that help to transfer information between elements within computer system 1900, such as during start-up, may be stored in memory 1910. Memory 1910 may also include (e.g., stored on one or more machine-readable media) instructions (e.g., software) 1925 embodying any one or more of the aspects and/or methodologies of the present disclosure. In another example, memory 1910 may further include any number of program modules including, but not limited to, an operating system, one or more application programs, other program modules, program data, and any combinations thereof.
Computer system 1900 may also include a storage device 1930. Examples of a storage device (e.g., storage device 1930) include, but are not limited to, a hard disk drive for reading from and/or writing to a hard disk, a magnetic disk drive for reading from and/or writing to a removable magnetic disk, an optical disk drive for reading from and/or writing to an optical media (e.g., a CD, a DVD, etc.), a solid-state memory device, and any combinations thereof.
Storage device 1930 may be connected to bus 1915 by an appropriate interface (not shown). Example interfaces include, but are not limited to, SCSI, advanced technology attachment (ATA), serial ATA, universal serial bus (USB), IEEE 1394 (FIREWIRE), and any combinations thereof. In one example, storage device 1930 may be removably interfaced with computer system 1900 (e.g., via an external port connector (not shown)). Particularly, storage device 1930 and an associated machine-readable medium 1935 may provide nonvolatile and/or volatile storage of machine-readable instructions, data structures, program modules, and/or other data for computer system 1900. In one example, software 1925 may reside, completely or partially, within machine-readable medium 1935. In another example, software 1925 may reside, completely or partially, within processor 1905. Computer system 1900 may also include an input device 1940. In one example, a user of computer system 1900 may enter commands and/or other information into computer system 1900 via input device 1940. Examples of an input device 1940 include, but are not limited to, an alpha-numeric input device (e.g., a keyboard), a pointing device, a joystick, a gamepad, an audio input device (e.g., a microphone, a voice response system, etc.), a cursor control device (e.g., a mouse), a touchpad, an optical scanner, a video capture device (e.g., a still camera, a video camera), a touch-screen, and any combinations thereof. Input device 1940 may be interfaced to bus 1915 via any of a variety of interfaces (not shown) including, but not limited to, a serial interface, a parallel interface, a game port, a USB interface, a FIREWIRE interface, a direct interface to bus 1915, and any combinations thereof.
A user may also input commands and/or other information to computer system 1900 via storage device 1930 (e.g., a removable disk drive, a flash drive, etc.) and/or a network interface device 1945. A network interface device, such as network interface device 1945 may be utilized for connecting computer system 1900 to one or more of a variety of networks, such as network 1950, and one or more remote devices 1955 connected thereto. Examples of a network interface device include, but are not limited to, a network interface card, a modem, and any combination thereof. Examples of a network or network segment include, but are not limited to, a wide area network (e.g., the Internet, an enterprise network), a local area network (e.g., a network associated with an office, a building, a campus or other relatively small geographic space), a telephone network, a direct connection between two computing devices, and any combinations thereof. A network, such as network 1950, may employ a wired and/or a wireless mode of communication. In general, any network topology may be used. Information (e.g., data, software 1925, etc.) may be communicated to and/or from computer system 1900 via network interface device 1945.
Computer system 1900 may further include a video display adapter 1960 for communicating a displayable image to a display device, such as display device 1965. A display device may be utilized to display any number and/or variety of indicators related to the knowledge assessment and/or learning of a learner, as discussed above. Examples of a display device include, but are not limited to, a liquid crystal display (LCD), a cathode ray tube (CRT), a plasma display, and any combinations thereof. In addition to a display device, a computer system 1900 may include one or more other peripheral output devices including, but not limited to, an audio speaker, a printer, and any combinations thereof. Such peripheral output devices may be connected to bus 1915 via a peripheral interface 1970. Examples of a peripheral interface include, but are not limited to, a serial port, a USB connection, a FIREWIRE connection, a parallel connection, and any combinations thereof. In one example an audio device may provide audio related to data of computer system 1900 (e.g., data representing an indicator related to a learner's assessment and/or learning progress).
A digitizer (not shown) and an accompanying stylus, if needed, may be included in order to digitally capture freehand input. A pen digitizer may be separately configured or coextensive with a display area of display device 1965. Accordingly, a digitizer may be integrated with display device 1965, or may exist as a separate device overlaying or otherwise appended to display device 1965. Display devices may also be embodied in the form of tablet devices with or without touch-screen capability.
The confidence-based assessment can be used as a confidence-based certification instrument, both as a pre-test practice assessment and as a learning instrument. As a pre-test assessment, the confidence-based certification process would not provide any remediation, but only a score and/or knowledge profile. The confidence-based assessment would indicate whether the individual held any confidently held misinformation in any of the certification material being presented. This would also provide, to a certification body, the option of prohibiting certification where misinformation exists within a given subject area. Since the CBA method is more precise than current one-dimensional testing, confidence-based certification increases the reliability of certification testing and the validity of certification awards.
In the instance where the system is used as a learning instrument, the learner can be provided the full breadth of formative assessment and learning manifest in the system to assist the learner in identifying specific skill gaps, filling those gaps remedially, and/or preparing for a third-party administered certification exam.
The confidence-based assessment can apply to adaptive learning approaches in which one answer generates two metrics with regard to confidence and knowledge. In adaptive learning, the use of video or scenarios to describe a situation helps the individual work through a decision-making process that supports his/her learning and understanding. In these scenario-based learning models, individuals can repeat the process a number of times to develop familiarity with how they would handle a given situation. For scenarios or simulations, CBA and CBL add a new dimension by determining how confident individuals are in their decision process. The use of the confidence-based assessment with a scenario-based learning approach enables individuals to identify where they are uninformed and have doubts about their performance and behavior. Repeating scenario-based learning until individuals become fully confident increases the likelihood that the individuals will act rapidly and consistently as a result of their training. CBA and CBL are also ‘adaptive’ in that each user interacts with the assessment and learning based on his or her own learning aptitude and prior knowledge, and the learning will therefore be highly personalized to each user.
The confidence-based assessment can be applied as a confidence-based survey instrument, which incorporates the choice of three possible answers, in which individuals indicate their confidence in and opinion on a topic. As before, individuals select an answer response from seven options to determine their confidence and understanding in a given topic or their understanding of a particular point of view. The question format would be related to attributes or comparative analysis with a product or service area in which both understanding and confidence information is solicited. For example, a marketing firm might ask, “Which of the following is the best location to display a new potato chip product? A) at the checkout; B) with other snack products; C) at the end of an aisle.” The marketer is not only interested in the consumer's choice, but the consumer's confidence or doubt in the choice. Adding the confidence dimension increases a person's engagement in answering survey questions and gives the marketer richer and more precise survey results.
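For a three-answer survey item, the seven options described above can be enumerated as in the sketch below, assuming one “sure” and one “partially sure” option per answer plus a single “don't know” option (3 + 3 + 1 = 7); the exact wording and composition of the option set are assumptions.

```python
from itertools import product

# Hedged enumeration of seven response options for a three-answer item:
# a "sure" and a "partially sure" option per answer, plus one
# "don't know" option. The composition is an assumption.

def seven_options(answers):
    opts = [f"{conf}: {a}" for conf, a in
            product(("I am sure", "I am partially sure"), answers)]
    return opts + ["I don't know yet"]

print(seven_options(["A) checkout", "B) snack aisle", "C) end of aisle"]))
# -> 7 options: 3 sure, 3 partially sure, 1 don't know
```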
Further aspects in accordance with the present invention provide learning support where resources for learning are allocated based on the quantifiable needs of the learner as reflected in a knowledge assessment profile, or by other performance measures as presented herein. Thus, aspects of the present invention provide a means for the allocation of learning resources according to the extent of true knowledge possessed by the learner. In contrast to conventional training where a learner is generally required to repeat an entire course when he or she has failed, aspects of the present invention disclosed herein facilitate the allocation of learning resources such as learning materials, instructor and studying time by directing the need of learning, retraining, and reeducation to those substantive areas where the subject is misinformed or uninformed.
Other aspects of the invention provide a “Personal Training Plan” page presented to the user by the system. The page displays the queries, sorted and grouped according to various knowledge regions. Each of the grouped queries is hyper-linked to the correct answer and other pertinent substantive information and/or learning materials on which the learner was queried. Optionally, the questions can also be hyper-linked to online informational references or off-site facilities. Instead of wasting time reviewing all materials covered by the test query, a learner or user need only concentrate on the material pertaining to those areas that require attention or reeducation. Critical information errors can be readily identified and avoided by focusing on areas of misinformation and partial information.
To effect such a function, the assessment profile is mapped or correlated to the informational database and/or substantive learning materials, which is stored in the system or at off-system facilities such as resources within an organization's local area network (LAN) or in the World Wide Web. The links are presented to the learner for review and/or reeducation.
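As a hedged sketch of this mapping, the function below groups queries by knowledge region and attaches remedial links; the region labels follow the categories used herein, while the data shapes and URLs are hypothetical.

```python
# Illustrative "Personal Training Plan" assembly: queries are grouped
# by knowledge region and linked to remedial material. Data shapes and
# URLs are hypothetical.

def personal_training_plan(profile, links):
    """`profile` maps question id -> knowledge region (e.g. 'misinformed');
    `links` maps question id -> remedial material URL."""
    plan = {}
    for qid, region in profile.items():
        if region in ("misinformed", "uninformed", "doubt"):
            plan.setdefault(region, []).append((qid, links.get(qid)))
    return plan

profile = {"Q1": "informed", "Q2": "misinformed", "Q3": "doubt"}
links = {"Q2": "https://lms.example/material/Q2",
         "Q3": "https://lms.example/material/Q3"}
print(personal_training_plan(profile, links))
# -> only the misinformed and doubtful areas appear in the plan
```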
In addition, the present invention further provides automated cross-referencing of the test queries to the relevant material or matter of interest on which the test queries are formulated. This ability effectively and efficiently facilitates the deployment of training and learning resources to those areas that truly require additional training or reeducation.
Further, with the present invention, any progress associated with retraining and/or reeducation can be readily measured. Following a retraining and/or reeducation event, a learner could be retested (based on the prior performance results) with portions or all of the test queries, from which a second knowledge profile can be developed.
In all the foregoing applications, the present method gives more accurate measurement of knowledge and information. Individuals learn that guessing is penalized, and that it is better to admit doubts and ignorance than to feign confidence. They shift their focus from test-taking strategies and trying to inflate scores toward honest self-assessment of their actual knowledge and confidence. This gives subjects as well as organizations rich feedback as to the areas and degrees of mistakes, unknowns, doubts and mastery. Having now fully set forth the preferred embodiments and certain modifications of the concept underlying the present invention, various other embodiments as well as certain variations and modifications of the embodiments herein shown and described will obviously occur to those skilled in the art upon becoming familiar with the underlying concept. It is to be understood, therefore, that the invention may be practiced otherwise than as specifically set forth herein.
This application is a Continuation-In-Part of U.S. patent application Ser. No. 13/216,017 filed on Aug. 23, 2011, which is a Continuation-In-Part of U.S. patent application Ser. No. 13/029,045 filed Feb. 16, 2011. This application is also related to U.S. patent application Ser. No. 12/908,303, filed on Oct. 20, 2010, U.S. patent application Ser. No. 10/398,625, filed on Sep. 23, 2003, U.S. patent application Ser. No. 11/187,606, filed on Jul. 23, 2005, and U.S. Pat. No. 6,921,268, issued on Jul. 26, 2005. The details of each of the above-listed applications are hereby incorporated into the present application by reference and for all proper purposes.
Relation | Number | Date | Country
---|---|---|---
Parent | 13216017 | Aug 2011 | US
Child | 14155439 | | US