Embodiments of the present invention relate to computer-based instruction.
Computer-based instruction involves the presentation of instructional/educational content to a user by means of a computer. The educational content may be embodied in a software program that presents the educational content to the user in an interactive manner.
According to a first aspect of the invention, there is provided an adaptive method for adapting computer-based instruction in the form of lessons to suit an individual learner. In one embodiment, the adaptive method comprises making observations about the learning behavior of the student, and using heuristics to infer an assessment of the learner's performance in terms of one or more performance or assessment criteria/axes, based on the observations. The assessment is then used to drive or control adaptation of the lesson.
In one embodiment, the assessment and the adaptation occur continuously. Thus, advantageously, the adaptive method allows adaptation of a lesson while a learner is interacting with the lesson.
In some embodiments, the assessment axes may include the following:
In one embodiment, the adaptive method comprises providing a mechanism for teachers to describe how they expect students of varying levels of developmental understanding to perform for a given set of questions. This mechanism, referred to herein as the “expectation matrix,” can utilize as many of the above assessment axes as the teacher feels are relevant for a question. In one embodiment, student responses on the varying axes are not taken in isolation, but rather are used in combination to determine an overall score.
In one embodiment, each level of developmental understanding defined in the expectation matrix has a corresponding set of adaptation control parameters to control adaptation of a lesson for a learner determined to fall within that level of developmental understanding.
Adaptation of a lesson may be in accordance with one or more adaptation criteria or adaptation axes. In one embodiment, the adaptation criteria include the following:
In one embodiment, an adaptation profile maps a desired order and combination of adaptation axes to a particular learner based on the aforesaid overall score for the learner.
According to a second aspect of the invention, there is provided a system to implement the adaptive method.
Other aspects of the invention will be apparent from the detailed description below.
In the following description, for purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the invention. It will be apparent, however, to one skilled in the art, that the invention can be practiced without these specific details.
Reference in this specification to “one embodiment” or “an embodiment” means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the invention. The appearances of the phrase “in one embodiment” in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. Moreover, various features are described which may be exhibited by some embodiments and not by others. Similarly, various requirements are described which may be requirements for some embodiments but not other embodiments.
Embodiments of the present invention disclose an adaptive learning method whereby lessons are adapted to ensure suitability to a particular learner. Within the context of the present invention, lessons teach a variety of subjects such as math, science, history, languages, etc. Lessons may comprise problems, each associated with a particular skill or micro-objective (
A glossary of terms useful for understanding the present invention is provided in Appendix A.
More detail on possible axes of assessment is provided in Appendix B.
In one embodiment, the assessment of the student's learning behavior is embodied in one or more scores 110 that are the output of the assessment process 106. The scores are indicative of the student's learning developmental level and are determined based on heuristics 108.
Since the assessment process 106 uses the data generated by the observation process 100, the type of data that is collected/generated by the observation process 100 is based, at least in part, on the particular assessment axes 104 the assessment process 106 is configured to assess.
Advantageously, a system implementing the adaptive method of
Continuing with
In one embodiment, the particular lessons within the subset 116 may themselves be adapted under the adaptation process 112. More particularly, the adaptation process 112 uses an expectation matrix 122 and the scores 110 to generate an adaptation profile 118. In one embodiment, the expectation matrix 122 describes how teachers expect students of varying levels of understanding to perform for a given set of questions within a lesson. An example of the expectation matrix 122 is provided in
The expectation matrix 200 shown in
Aspects of the above-described adaptive learning method may be performed by a client learning system communicatively coupled to a server learning system, as is illustrated in
In one embodiment, the server learning system 300 may include one or more servers each including server hardware 302 and server software 304. The particular components of the server hardware 302 and the server software 304 will vary in accordance with different implementations. One example of the hardware 302 and the software 304 used to realize the server system 300 is provided in
The client learning system 306 represents any device such as a desktop or laptop computer, a mobile phone, a Personal Digital Assistant (PDA), an embedded system, a server appliance, etc. Generically, the client learning system 306 includes client hardware 308 and client software 310 and may be implemented as the system 1000 described below with reference to
In one embodiment, the communications network may comprise a Wide Area Network (WAN), to support communications between the server learning system 300 and the client learning system 306 in accordance with different communications protocols. By way of example, the communications network may support the Transmission Control Protocol over the Internet Protocol (TCP/IP). Thus, the communications network 312 may comprise the Internet.
In one embodiment, a learner (also referred to herein as “a student” or “user”) downloads software from the server learning system 300 over the communications network 312. The term “software” is used herein to indicate one or more software programs comprising instructions that are machine-executable or virtual machine-executable, as well as data associated with the execution of the programs. In one embodiment, the software may be downloaded from the server learning system 300. In other embodiments, the software may include executable instructions pre-installed on the client adaptive learning system.
Each lesson when executing on the client learning system 306 has a lesson runtime or execution environment.
In one embodiment, the assessment manager 412 performs the assessment process 106 by computing a Question Score upon the completion of a question (i.e. there is no opportunity for the student to make any further changes) based on the metrics received from the reporters 410. The Question Scores may be in the range of 0 to 100.
Each question posed in a lesson assesses a specific micro-objective. (Where two or more questions are asked in parallel, two or more micro-objectives will be assessed.) Thus, a Question Score is the score(s) for the micro-objective(s) associated with a lesson. In accordance with embodiments of the present invention, in determining a Question Score, the assessment manager 412 generates a value based on at least the responses for each assessment axis, weighted by a teacher-supplied value, a difficulty level of the question, and an assistance score. Notionally, the Question Score for a particular question may be regarded as the maximum possible score for that question adjusted by the type and quantity of the mistakes made and assistance provided.
In one embodiment, the maximum possible score for a question is calculated as:
(CAS*D)
The values CAS and D are assigned by a teacher and are independent variables.
By way of example, and in one embodiment, the following formula is used to calculate the Question Score for a correct answer:
QS=(CAS*D)−WM*MS−WA*AS+WR*RS+WS*SS
Appendix C describes how MS, AS, RS, SS, and their respective weightings are computed, in one embodiment. The learner's scores for each assessment category (i.e. the values MS, AS, RS, and SS) in the above formula are modified by weighting values that allow for fine tuning of how a series of lessons evaluate similar responses where expectations of student performance differ. For example, there may be two lessons, viz. Lesson 1 and Lesson 2, with the questions of Lesson 2 being more difficult than the questions of Lesson 1. Given the difference in the difficulty of the questions in the two lessons, a teacher would expect a student to make more mistakes in Lesson 2. Moreover, Lesson 2 may be configured to provide more assistance to a student. Thus, a lower weighting for mistakes and assistance may be set for Lesson 2 than for Lesson 1. The weighting values are a combination of at least two separate values: one supplied by the author of the lesson, and the other generated by the system, which is used to optimize the weighting effectiveness over time.
To illustrate how Question Scores are calculated, consider the expectation matrix 500 shown in
For illustrative purposes, in the matrix 500, a novice is given zero points, an apprentice one point, a practitioner two points, and an expert three points. These values are supplied by a teacher. The teacher also supplies the weights for each axis of assessment. Using the above formula to calculate the Question Score, the matrix 500 yields a Question Score of 76 for a value D of 1.0.
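By way of illustration only, the Question Score formula above may be rendered in code as follows. The function below is a sketch: the clamping of the result to the 0-to-maximum range and all of the example input values (the axis scores, the weightings, and CAS=100) are assumptions made for the example, not values prescribed by the specification.

```python
# Minimal sketch of the Question Score formula QS = (CAS*D) - WM*MS - WA*AS + WR*RS + WS*SS.
# The clamping and all example values below are assumptions; in practice CAS, D,
# and the weightings would come from the teacher/lesson configuration.

def question_score(cas, d, ms, as_, rs, ss, wm=1.0, wa=1.0, wr=1.0, ws=1.0):
    """Compute a Question Score from the maximum possible score (CAS * D),
    reduced by weighted Mistakes (MS) and Assistance (AS) scores and
    increased by weighted Responsiveness (RS) and Self-correction (SS) scores."""
    max_possible = cas * d
    qs = max_possible - wm * ms - wa * as_ + wr * rs + ws * ss
    # Assumed: keep the result within the nominal 0..max_possible range.
    return max(0.0, min(max_possible, qs))

# Hypothetical usage: a question with maximum score 100 (CAS=100, D=1.0),
# a small mistakes penalty and some assistance, no bonus adjustments.
print(question_score(cas=100, d=1.0, ms=15, as_=10, rs=0, ss=0))  # -> 75.0
```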
Using an expectation matrix 122 and a formula similar to the one described above for determining a Question Score, a teacher can determine an expected Question Score for a learner in each of the developmental categories described above. In accordance with one embodiment, a difference between the actual Question Score and the expected Question Score based on the learner's developmental level can be used to perform intra-lesson adaptations during execution of a lesson on the client learning system, as will be described.
After each question is answered, in one embodiment, both Current Performance and Micro-objective Scores are calculated. These provide, respectively, a general indication of how the student is performing on the lesson overall at that moment, and how well the student is responding to questions of either a specific type or covering specific subject matter. Both the Current Performance and the Micro-Objective Scores for a particular student represent a mastery quotient for the subject matter that a lesson is designed to teach.
Both these scores are generated by calculating a weighted average of the last N Question Scores.
The Current Performance Score looks back over all recent answers of all types, while the Micro-objective Score is based upon answers to questions of a single type.
Only the last N Question Scores are used when generating these derived scores for the following reasons:
There are two specific ways of processing Question Scores: One treats the scores obtained when answering each question as absolute and does not take into account what the possible maximum was. The other essentially adjusts the accumulated score in relation to what was possible for each question.
Which approach is used is determined by the type of lesson. The majority of lessons contain phases where there are multiple problems and either one or a few questions per problem. Some lessons, however, contain a single problem with multiple questions, often of differing difficulty levels. The former case usually requires questions of lower difficulty to be assessed at a lower level. The latter, however, may require that, regardless of the difficulty of each individual question, the overall score be the nominal maximum (100) if no mistakes were made, even if the individual scores were 80, 80, 80, 80 for a set of questions where the maximum possible score for each (adjusted, for example, for difficulty) was 80.
The formula to calculate either the Current Performance or Micro-objective Scores when all Question Scores are treated independently (the former case) is:
The formula to calculate either the Current Performance or Micro-objective Scores when all questions within a problem must be taken as a whole (the latter case) is shown below. Note that the value ‘N’ in this case should be equal to the number of questions asked in the problem (and therefore may be variable on a per-problem basis).
The following are examples of possible weighting tables. The first weights the latest question score (as represented by the right-most position in the table) as 25% more significant than the three preceding it. The second treats the three most recent scores equally and then gradually reduces the impact of scores previous to those:
It should be noted that for a given value of N, the two formulas produce differing results only when the difficulty levels of the questions asked vary.
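By way of illustration only, the two ways of processing Question Scores may be sketched as follows. The functions, the weighting tables, and the example data are assumptions that are merely consistent with the descriptions above (a simple weighted average for the independent case, and a weighted average normalized against the maximum possible per question for the taken-as-a-whole case); they are not taken verbatim from the specification.

```python
# Hedged sketch of deriving Current Performance / Micro-objective Scores from
# the last N Question Scores. Both formulas and the weighting tables below are
# illustrative assumptions.

def weighted_average_absolute(question_scores, weights):
    """Approach 1: treat each Question Score as absolute, ignoring the
    maximum that was possible for each question."""
    total_w = sum(weights)
    return sum(w * qs for w, qs in zip(weights, question_scores)) / total_w

def weighted_average_relative(question_scores, max_scores, weights):
    """Approach 2: adjust the accumulated score in relation to what was
    possible for each question, so a mistake-free run scores the nominal 100."""
    earned = sum(w * qs for w, qs in zip(weights, question_scores))
    possible = sum(w * m for w, m in zip(weights, max_scores))
    return 100.0 * earned / possible

# Example weighting tables (right-most entry is the latest question).
# The first weights the latest score 25% more heavily than the three before it;
# the second treats the three most recent equally and tapers off older scores.
weights_a = [0.8, 0.8, 0.8, 1.0]
weights_b = [0.4, 0.6, 0.8, 1.0, 1.0, 1.0]  # used the same way over the last six scores

scores = [80, 80, 80, 80]
maxima = [80, 80, 80, 80]
print(weighted_average_absolute(scores, weights_a))          # 80.0
print(weighted_average_relative(scores, maxima, weights_a))  # 100.0
```

Note that the relative form reproduces the behavior described above: four scores of 80 against a per-question maximum of 80 yield the nominal 100, while the absolute form yields 80.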
It will be appreciated that each Question Score represents a heuristic used to assess a student's level of developmental understanding.
A flowchart of intra-lesson adaptation, in accordance with one embodiment is shown in
The steps in the flowchart of
The adaptation manager 414 adapts the lesson (this is termed “lesson level adaptation”) using initial adaptation control parameters 416 (see
As can be seen, the adaptation parameters 416 are set based on expected Question Scores and include changes in the level of instruction, the level of assistance, the minimum and maximum distances between the numbers being compared, etc. Another example of a lesson level adaptation includes weighting/rebalancing “choosers”. Choosers are used by the lesson infrastructure to choose quasi-randomly between a limited set of choices. Rebalancing a chooser changes the probability that each choice might be chosen. Possible choices might include things such as which tools might be available, or which operation (e.g. subtraction, addition, equality, etc.) is to be tested in the next question. Another type of lesson level adaptation may be transitioning the lesson to a different state. Yet another type of lesson level adaptation may be enabling/disabling lower-level (more specific) adaptations.
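By way of illustration only, a chooser and its rebalancing might be sketched as follows; the class, its methods, and the weights shown are hypothetical and are not taken from the lesson infrastructure itself.

```python
import random

# Hedged sketch of a "chooser": quasi-random selection among a limited set of
# choices whose probabilities can be rebalanced as a lesson-level adaptation.

class Chooser:
    def __init__(self, weights):
        # weights maps each choice (e.g. "addition", "subtraction") to a
        # relative probability of being chosen.
        self.weights = dict(weights)

    def choose(self):
        choices = list(self.weights)
        return random.choices(choices, weights=[self.weights[c] for c in choices])[0]

    def rebalance(self, new_weights):
        # Lesson-level adaptation: change the probability of each choice,
        # e.g. to favour operations the learner needs more practice with.
        self.weights.update(new_weights)

op_chooser = Chooser({"addition": 1.0, "subtraction": 1.0, "equality": 1.0})
# After assessment, emphasise subtraction for a struggling learner.
op_chooser.rebalance({"subtraction": 2.0, "equality": 0.5})
print(op_chooser.choose())
```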
Problem context includes the many individual user interface (UI), tool and generator (a tool used to generate a problem) configurations that make a set of problems, as presented to the student.
This could, for example, involve using a chooser to select the type of operation to be performed.
Adaptation is performed if a difference between an expected score and a calculated score is above a certain threshold. Possible adaptations or axes of adaptation include changes in the following:
This is done by the assessment manager as described above.
This may involve telling the student that the answer is correct/incorrect and perhaps providing some hints or remediation material to help the student.
Some lessons give partial credit when students correct their work after feedback.
This is performed by the assessment manager in accordance with the techniques described above.
Referring now to
The effectiveness factor is a measure of how effective a lesson is at teaching certain skills and/or concepts (micro-objectives). As such, the effectiveness factor can be influenced by a variety of factors which may include: the teaching approach used, the learning styles of the students, how well the lesson author executed in creating the lesson, etc. When there are multiple lessons, each attempting to teach and assess the same micro-objectives, the effectiveness of each, for a given group of learners, can be calculated by observing the scores obtained in subsequent common lessons that either require or build upon the skills taught in the preceding lessons. This effectiveness is expressed as the Effectiveness Factors for a lesson which are used to adjust the Micro-Objective Scores obtained from the previous lessons to ensure that they accurately represent the skills of the student and are therefore more accurate predictors of performance in subsequent lessons.
In one embodiment the Effectiveness Factors for a group of lessons are calculated by the system using the scores for all students who have completed those lessons and have also completed one or more common subsequent lessons. One possible algorithmic approach for doing this is as follows:
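By way of illustration only, one such calculation might proceed as sketched below, under the assumption that a preceding lesson's effectiveness is judged by comparing the Micro-Objective Scores its students carried out of that lesson with the scores those same students later obtain in a common subsequent lesson. The function, the normalization to a 0-100 factor, and the example data are all assumptions.

```python
# Hedged sketch of one possible Effectiveness Factor calculation. The exact
# approach used by the system may differ.

from collections import defaultdict
from statistics import mean

def effectiveness_factors(records):
    """records: list of (preceding_lesson_id, prior_score, subsequent_score)
    for students who completed a preceding lesson and a common subsequent lesson.
    Returns an assumed 0-100 Effectiveness Factor per preceding lesson."""
    by_lesson = defaultdict(list)
    for lesson_id, prior, subsequent in records:
        by_lesson[lesson_id].append((prior, subsequent))

    factors = {}
    for lesson_id, pairs in by_lesson.items():
        prior_avg = mean(p for p, _ in pairs)
        subsequent_avg = mean(s for _, s in pairs)
        # A lesson whose students carry their scores forward intact gets 100;
        # one whose students fall away in the subsequent lesson gets less.
        factors[lesson_id] = min(100.0, 100.0 * subsequent_avg / prior_avg) if prior_avg else 0.0
    return factors

# Hypothetical data: Lesson A's students hold up better than Lesson B's.
records = [("A", 80, 78), ("A", 90, 88), ("B", 85, 60), ("B", 75, 55)]
print(effectiveness_factors(records))
```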
The above steps may be repeated at defined intervals in order to re-calculate the Effectiveness Factors for a lesson. In some cases, the Effectiveness Factors for a lesson may be adjusted or updated based on the re-calculated values.
In another embodiment the students may additionally be first divided into groups that emphasize their similarities and a process—perhaps very similar to that described above—is then run for each group. This would result in multiple Effectiveness Factors per lesson, one per group. In another embodiment students could be placed successively into multiple groups and the process run multiple times for each combination of groups.
In another embodiment the algorithm may include not just the immediate predecessor lessons, but also any sets of equivalent lessons that may have preceded them.
In one embodiment, key lessons are lessons that have an Effectiveness Factor for a particular micro-objective that is above a threshold, say 85%. Within a curriculum, a key lesson may be highly desirable in contrast with a lesson with an Effectiveness Factor of less than, say, 50%. In some embodiments, lessons with Effectiveness Factors below a threshold, say 30%, may be removed from the system.
In another embodiment it may be determined that students with certain learning styles (e.g. Visual vs. Auditory) may all perform better when presented with one style of lesson rather than another. In those cases each of the group of similar lessons may have more than one Effectiveness Factor—one for each group of students that share a common learning style where there is an observed different level of effectiveness.
Referring now to
This component implements functionality to authenticate a learner to the system, e.g. by user name and password.
This component is responsible for sending lessons to a student for execution on the client adaptive learning system.
In one embodiment, the adaptation manager 906 scales the Question Scores received from a client adaptive learning system to yield a lesson-independent Micro-Objective Score. A formula for computing the lesson-independent Micro-Objective Score is provided in Appendix D. The adaptation manager 906 includes an analysis engine 908 that is responsible for analyzing the Question Scores for a population of students. The analysis engine also calculates the Effectiveness Factors described above.
This component controls execution within each of the components of the environment 900.
The environment 900 includes one or more databases 912. These include a lessons database 914, and a database 916 of student profiles which comprise the adaptation control parameters for each student.
The hardware 1000 also typically receives a number of inputs and outputs for communicating information externally. For interface with a user or operator, the hardware 1000 may include one or more user input devices 1006 (e.g., a keyboard, a mouse, etc.) and a display 1008 (e.g., a Liquid Crystal Display (LCD) panel). For additional storage, the hardware 1000 may also include one or more mass storage devices 1010, e.g., a floppy or other removable disk drive, a hard disk drive, a Direct Access Storage Device (DASD), an optical drive (e.g. a Compact Disk (CD) drive, a Digital Versatile Disk (DVD) drive, etc.) and/or a tape drive, among others. Furthermore, the hardware 1000 may include an interface with one or more networks 1012 (e.g., a local area network (LAN), a wide area network (WAN), a wireless network, and/or the Internet among others) to permit the communication of information with other computers coupled to the networks. It should be appreciated that the hardware 1000 typically includes suitable analog and/or digital interfaces between the processor 1002 and each of the components 1004, 1006, 1008 and 1012 as is well known in the art.
The hardware 1000 operates under the control of an operating system 1014, and executes various computer software applications, components, programs, objects, modules, etc. indicated collectively by reference numeral 1016 to perform the above-described techniques. In the case of the server system 300 various applications, components, programs, objects, etc. may also execute on one or more processors in another computer coupled to the hardware 1000 via a network 1012, e.g. in a distributed computing environment, whereby the processing required to implement the functions of a computer program may be allocated to multiple computers over a network.
In general, the routines executed to implement the embodiments of the invention may be implemented as part of an operating system or a specific application, component, program, object, module or sequence of instructions referred to as “computer programs.” The computer programs typically comprise one or more instructions that are set at various times in various memory and storage devices in a computer and that, when read and executed by one or more processors in a computer, cause the computer to perform operations necessary to execute elements involving the various aspects of the invention. Moreover, while the invention has been described in the context of fully functioning computers and computer systems, those skilled in the art will appreciate that the various embodiments of the invention are capable of being distributed as a program product in a variety of forms, and that the invention applies equally regardless of the particular type of machine or computer-readable media used to actually effect the distribution. Examples of computer-readable media include but are not limited to recordable type media such as volatile and non-volatile memory devices, floppy and other removable disks, hard disk drives, optical disks (e.g., Compact Disk Read-Only Memory (CD ROMS), Digital Versatile Disks (DVDs), etc.), among others, and transmission type media such as digital and analog communication links.
Although the present invention has been described with reference to specific exemplary embodiments, it will be evident that various modifications and changes can be made to these embodiments without departing from the broader spirit of the invention. Accordingly, the specification and drawings are to be regarded in an illustrative sense rather than in a restrictive sense.
A variety of scores are generated and used by the system. They include:
Each axis of assessment will now be discussed together with how the various categories of learners are expected to perform for that axis of assessment. In most cases, the observed data collected by the various reporters as part of the observation process 100 for a particular axis of assessment will be apparent from the discussion for that axis of assessment.
1. Number of Interactions
In one embodiment, there is an optimal number of moves or interactions that the client learning system allows for a learner to provide an answer to a question. The number of interactions may be an indicator of the strategy that a learner is using. For example, a lower performing student may take more moves than necessary to answer a question, either because they make mistakes or because they do not use a more elegant/efficient strategy to answer the question. By way of example, suppose the question was to represent the number four on a ten frame, i.e. a box that had 10 holes to fill in. A student may decide to take four single counters and place them each in four cells on that ten frame. Alternatively, the student could make four using a block of three counters and a single counter, or two blocks each having two counters. So if the student used single counters and placed each one of those in the correct locations, they would take four moves. If they took two lots of two and placed them in the correct locations, they would have two moves. Thus, the optimal number of moves or interactions in this case is two.
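By way of illustration only, the comparison of a learner's move count against an optimal count might be sketched as follows. The greedy decomposition used to derive the optimum is an assumption that happens to suit the ten-frame example above; the specification does not prescribe how the optimum is determined.

```python
# Hedged sketch of the "number of interactions" observation: how far the
# learner's move count exceeded an assumed optimal count.

def optimal_moves(target, block_sizes):
    """Fewest counter placements needed to build `target` using the available
    block sizes (e.g. target 4 with blocks {1, 2, 3} -> 2 moves: a 3 and a 1).
    Greedy decomposition; adequate for simple ten-frame style questions."""
    remaining, moves = target, 0
    for size in sorted(block_sizes, reverse=True):
        while remaining >= size:
            remaining -= size
            moves += 1
    return moves

def interaction_observation(actual_moves, target, block_sizes):
    """Report how far the learner's move count exceeded the optimum; a larger
    excess may indicate a less efficient strategy or mistakes along the way."""
    optimum = optimal_moves(target, block_sizes)
    return {"optimal": optimum, "actual": actual_moves, "excess": actual_moves - optimum}

# The ten-frame example from the text: building four with single counters
# takes four moves, while two blocks of two would take the optimal two moves.
print(interaction_observation(actual_moves=4, target=4, block_sizes={1, 2, 3}))
```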
2. Mistakes while Answering
In many cases, the client learning system will guide a student to a correct answer. Thus, keeping track of how many times a student got the right answer would not accurately reflect the student's mastery of the subject matter being taught by the question. Thus, in one embodiment the reporters keep track of the number of mistakes made while answering questions.
3. Types of Mistakes
In one embodiment, the reporters also track and report the types of mistakes made while answering a question. In the above example, where the problem was to make the number four by moving a series of counters, one of the mistakes could be taking too many interactions.
For example, in one question, the client learning system could ask “What is two plus two?” and may provide a number line with the numbers one through ten as buttons for the student to click on to indicate the answer. If the student clicks on the number three, they are one unit away from the correct answer. This is an “off by one” mistake and quite different from the situation where the student clicks on the number nine. In one embodiment, the reporters track and report “off by one” mistakes to the assessment manager 412.
In one embodiment, the assessment manager uses a variety of algorithms to evaluate the mistakes made in order to work out how close or how far away a student is from the correct answer. For example, in some cases, the correct answer is three and a student clicks on eight, which may be indicative of a common digit substitution problem where the student is mistaking the number three for the number eight. Other common digit substitution errors include mistaking two and five, and six and nine.
In one embodiment, digit substitution errors are tracked and reported to the assessment manager 412.
In cases where a student is making digit substitution errors, the lesson may be adapted to provide assistance to overcome this type of error.
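By way of illustration only, the off-by-one and digit-substitution checks might be implemented along the following lines. The substitution pairs (three/eight, two/five, six/nine) are those mentioned above; the category names and the function itself are hypothetical.

```python
# Hedged sketch of classifying a numeric answer mistake. The structure and
# category names are illustrative assumptions.

DIGIT_SUBSTITUTIONS = {(3, 8), (8, 3), (2, 5), (5, 2), (6, 9), (9, 6)}

def classify_mistake(correct, given):
    """Return a mistake category a reporter might send to the assessment manager."""
    if given == correct:
        return "correct"
    if abs(given - correct) == 1:
        return "off_by_one"
    if (correct, given) in DIGIT_SUBSTITUTIONS:
        return "digit_substitution"
    return "other"

print(classify_mistake(4, 3))   # off_by_one: close to correct
print(classify_mistake(3, 8))   # digit_substitution: 3 mistaken for 8
print(classify_mistake(4, 9))   # other: far from the correct answer
```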
4. Requires Assistance
When answering a question a student may request assistance by clicking on a “help” button, responsive to which the client learning system provides assistance to the student to help the student answer the question correctly. Naturally, the value of an ultimately correct answer as an indicator of subject matter mastery is diminished by the quality and the quantity of the assistance provided. Thus, the reporters may track the quality and the quantity of the assistance provided to a student, in one embodiment.
5. Self Corrects
Lessons presented by the client learning system usually have a “button” that a student selects to submit their answer. For example, a student may place one or more tiles on a ten frame to build a number, and then will select the submit button to indicate to the client learning system that that is their final answer. In some cases, after placement of the tiles on the ten frame, a student will realize that they have made a mistake and will change their answer by correcting the mistake before clicking the submit button. In one embodiment, the reporters track when a student self corrects.
6. Uses Resets when Available
Reset allows a student to reset a question so that the student may begin answering the question anew. In the case of a reset, a student has realized that a question may be answered in a “better” way. A novice usually never uses reset because they generally do not realize that they are making mistakes and not answering the question in an optimal way. An expert never has to use reset because they are always answering correctly. A practitioner, someone who is not quite an expert but getting there, might use one reset now and then, thinking “Oops, I know I made a mistake, I could have done that in a better way, I am going to try it again.” An apprentice, someone who is just starting to understand what is going on but is definitely a level above a novice, will realize that they are making mistakes but has not yet worked out what the optimal way to do things is, and may use reset one or two times to try to work it out.
7. Closeness to Correct
Given the nature of the mistakes a particular learner is making, under this axis of assessment the assessment process 106 is able to assess how close the learner is to being correct.
8. Demonstrates Developmental Level of Understanding
Under this axis of assessment, the assessment process 106 is seeking to assess whether a student is demonstrating a particular developmental level of understanding of the subject matter being taught. For example, a novice and apprentice may be expected to move counters in serial fashion one at a time, whereas a practitioner or expert may be expected to move counters in groups. Likewise, a novice and apprentice may be expected to move a pointer/mouse over each counter, thereby counting each counter that constitutes the answer, whereas a practitioner or expert might be expected to move the pointer directly to the answer.
9. Responsiveness
For this axis of assessment, the reporters collect timing data that measures how long it takes a student to answer a question. This axis of assessment is primarily used where novices are expected to take more time to answer questions than experts (assuming they are not guessing).
The axes of assessment 1-9 discussed thus far apply to individual questions. The axes of assessment 10-12 discussed below apply across a series of questions.
10. Answers Correctly
Under this axis of assessment, the reporters track a student's answers across a whole series of questions.
11. Mistakes
Reporters track the mistakes made by a student across a whole series of questions.
12. Handles Increases in Difficulty
For this axis of assessment, the assessment process 106 evaluates how a student responds to increases in the difficulty level of questions. For example, it is expected that a novice's responsiveness will decrease dramatically with corresponding increases in question difficulty. Thus, a chart of difficulty vs. responsiveness will have a hockey-stick-like appearance for a novice. As a student's developmental level approaches that of an expert, it is expected that increases in question difficulty will have minimal impact on responsiveness.
As described, the observation process and the assessment process are performed by the client learning system and involve tools, reporters, and the assessment manager. What follows is a description of how the individual scores that are used in the computation of a Question Score are determined.
The Mistakes Score accumulates for each question and is determined automatically whenever a student interacts with the system. It is a combination of two separate observations:
The count, category and the score for each mistake are recorded.
For each mistake the following occurs:
Mistake categories could include at least the following:
Only applicable in lessons where there are multiple different strategies supported for answering. When available this observation can be an important indicator of student achievement. Examples of different strategies are:
How quickly a student responds once it is clear what they have to do. Can be an indicator of either understanding or automaticity in many cases. Overall time to answer a question, or series of questions, is less indicative, however, than analysis of the timing of the various mental and physical phases a student may go through to respond:
By analyzing these three timings individually, as well as their summation, the system is able to make much more accurate assessments of a student's particular skills and weaknesses. For example, two students may have similar overall response times. However, the first starts to respond rapidly (a short Think Time) but takes some time to complete their answer, which involves manipulating a series of objects on-screen (a long Act Time). The other takes much longer to begin responding, but completes the on-screen manipulation much faster. Neither of these responses, taken in isolation, is necessarily a strong indicator of physical or mental aptitude. However, by recording these observations over time, the system may determine that one student consistently takes more time when completing tasks that require fine motor skills (or, perhaps, properly operating computer peripherals such as a mouse) and may adjust their Adaptation Profile and score calculations appropriately.
In general, Responsiveness Scores will be calculated as follows:
Responsiveness Score is determined by comparing how long the student took to answer in relation to those, or potentially a specific subset of those, who have previously used the same strategy for either this specific question, or similar questions within this lesson. Students who have response times outside a specified range—for example a Standard Deviation Multiple from the mean—will be classified as responding outside of expectations.
As with other areas of the invention, when comparing a specific student's performance—in this case responsiveness—the student may be compared against all students who have done this lesson previously or against a specific subset of students. Examples of possible subsets include students:
An example of how the Responsiveness Score could be calculated is as follows:
The Total Response Time (the actual time in seconds the student took to respond) is determined by summation of the Think, Preparation and Act times. The previously calculated Standard Deviation and Mean values for this lesson:question combination (and ideally this lesson:question:strategy combination) are used to calculate how this specific student's response compares with the responses of the appropriate collection of previous students. Values that exceed the fast and slow thresholds set in the lesson (possibly as standard deviation multiples) are used to calculate the Responsiveness Score. If the value falls outside either threshold, the positive (for faster than expected) or negative (for slower than expected) score to apply is calculated based upon the difference from the threshold.
The system will be seeded by obtaining timings from real students and is designed to not generate Responsiveness scores until a sufficient number of responses have been obtained. As lessons and the responses of the student populations change over time, so might the timing values and the thresholds. To optimize scoring of response times the system may automatically adjust the thresholds to ensure (for example) a certain percentage of students fall within the expected range.
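By way of illustration only, the threshold comparison described above might be sketched as follows. The threshold multiples, the linear scoring of the excess deviation, and the example timings are assumptions; only the use of the mean and standard deviation of prior students' response times comes from the description above.

```python
# Hedged sketch of the Responsiveness Score: the Total Response Time
# (Think + Preparation + Act) is compared against the mean and standard
# deviation of prior students' times for the same lesson:question(:strategy)
# combination. Thresholds and scoring magnitudes are illustrative assumptions.

def responsiveness_score(think, prep, act, mean_time, std_dev,
                         fast_threshold=1.0, slow_threshold=1.0, points_per_sd=10.0):
    """Positive score for faster-than-expected responses, negative for slower,
    zero when the response falls within the expected range."""
    total = think + prep + act
    deviation = (total - mean_time) / std_dev  # in standard-deviation multiples
    if deviation < -fast_threshold:
        return points_per_sd * (-deviation - fast_threshold)   # faster than expected
    if deviation > slow_threshold:
        return -points_per_sd * (deviation - slow_threshold)   # slower than expected
    return 0.0

# Hypothetical usage: prior students averaged 12s with a 3s standard deviation.
print(responsiveness_score(think=2.0, prep=1.0, act=3.0, mean_time=12.0, std_dev=3.0))  # -> 10.0
```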
Assistance is defined as something that could either help the student achieve the correct answer, or improve their score for this question. The Assistance Score is a combination of two factors:
Assistance Scores can be generated either directly from within the lesson, for example as part of a teacher-authored adaptation, or from individual lesson components that have been configured to generate an Assistance Score when interacted with in a certain way. For example, a “flash card” tool might be configured to flip and show the front rather than the back of the card to the student when clicked upon. Each flip, and the associated duration the front of the card is shown, could be automatically recorded as assistance by the lesson, if it were so configured.
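By way of illustration only, the accumulation of an Assistance Score from individual assistance events might be sketched as follows. The per-event impact values, the duration adjustment, and the class itself are hypothetical; the description above establishes only that both the quantity and the nature of the assistance contribute.

```python
# Hedged sketch of accumulating an Assistance Score from assistance events
# (e.g. a hint shown, a flash card flipped). All values are assumptions.

class AssistanceTracker:
    # Hypothetical impact of each kind of assistance on the score.
    IMPACTS = {"hint": 5.0, "card_flip": 2.0, "full_worked_example": 15.0}

    def __init__(self):
        self.events = []

    def record(self, kind, duration_seconds=0.0):
        """Called by a lesson or a lesson component configured to report assistance."""
        self.events.append((kind, duration_seconds))

    def assistance_score(self):
        # More, and more substantial, assistance yields a higher score,
        # which in turn reduces the Question Score.
        return sum(self.IMPACTS.get(kind, 1.0) + 0.1 * duration
                   for kind, duration in self.events)

tracker = AssistanceTracker()
tracker.record("card_flip", duration_seconds=4.0)
tracker.record("hint")
print(tracker.assistance_score())  # 2.0 + 0.4 + 5.0 = 7.4
```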
Each of the individual assessment axis scores can be further manipulated by a weighting that adjusts how much of an impact that score has on the calculation of the overall score for each question. In one embodiment the weightings could be supplied by a teacher as part of the lesson configuration and might range in value from 0 to 2.0. A weighting of 1.0 would cause, for example, a Mistakes Score to have a “standard” impact on the final score. A value of 2.0 would cause it to have twice the impact and a score of 0 would cause the system to ignore all mistakes when calculating the final score.
In another embodiment each weighting might be made up of the combination of both a teacher supplied value in the lesson configuration, as described above, and a system calculated value that is used to adjust that value and fine tune the score calculation. E.g.
W = WT*AS + AW

WT = Teacher supplied weighting
AS = System calculated teacher weighting adjustment
AW = System calculated weighting
In one embodiment the system generated adjustment value might be computed by comparing the final scores for students who do two or more lessons that assess the same micro-objectives. It might be determined that the scores for the lessons can be made to be more equal, and to more accurately represent a student's levels of skill, if one or more of the assessment axis score weightings are adjusted automatically by the system.
It should be noted that an embodiment that calculates and applies a Weighting Adjustment may be separate from that described for calculating and applying Effectiveness Factors for a lesson. Weighting Adjustments can be used to affect the scores of specific sub-groups of students within a lesson, for example only those who make mistakes or need assistance, since these are separately weighted. Those students who do not fall within that group will not have their scores affected. Effectiveness Factors, however, are related to the lesson itself and apply to all scores generated within that lesson. For example, in one embodiment an Effectiveness Factor of 70 would lower the scores for students who make no mistakes as well as for those who make many.
Within a lesson, a student's performance on each micro-objective is nominally scored between 0 and 100, though this range can be affected by the difficulty of individual questions. This score may not be an accurate indicator of the student's level of skill or a good predictor of future performance in lessons assessing similar micro-objectives. Therefore, once outside the scope of a lesson, each micro-objective score is potentially further scaled by a teacher-supplied Completeness Factor for that micro-objective and one of a potential set of system-generated Effectiveness Factors.
In one embodiment, the final micro-objective score that is usable in a lesson-independent way could be calculated as follows:
S = SLD*CF/100*EF/100

S = Lesson Independent Micro-objective score
SLD = Lesson Dependent (raw) Micro-objective score from the lesson
CF = Completeness Factor (teacher supplied)
EF = Effectiveness Factor
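By way of illustration only, the formula above may be applied as follows; the example values for the raw score, the Completeness Factor, and the Effectiveness Factor are hypothetical.

```python
# Direct rendering of the formula above: the lesson-dependent (raw) score is
# scaled by the teacher-supplied Completeness Factor and a system-generated
# Effectiveness Factor, both expressed as percentages.

def lesson_independent_score(raw_score, completeness_factor, effectiveness_factor):
    """S = SLD * CF/100 * EF/100"""
    return raw_score * completeness_factor / 100.0 * effectiveness_factor / 100.0

# A hypothetical raw micro-objective score of 90 in a lesson that covers 80% of
# the micro-objective (CF=80) with an Effectiveness Factor of 95.
print(lesson_independent_score(90, 80, 95))  # -> 68.4
```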