Priority is claimed from Australian patent application No. 2020903176, filed 4 Sep. 2020, the disclosure of which is hereby incorporated in its entirety by reference.
The present disclosure relates to methods and systems for automatically determining quality ratings for digital resources, including but not limited to electronic learning resources, for example resources that are used in the delivery of educational courses to students.
Any references to methods, apparatus or documents of the prior art are not to be taken as constituting any evidence or admission that they formed, or form, part of the common general knowledge.
The present invention will be described primarily in relation to digital learning resources such as learning materials in respect of a topic in an educational course, however it also finds application more broadly, including the following:
In the context of education, adaptive educational systems (AESs) [4] are information generating and processing systems that receive data about students, the learning process, and learning products via electronic data networks. Prior art AESs are configured to provide an efficient, effective and customised learning experience for students by dynamically adapting learning content to suit students' individual abilities or preferences. As an example, an AES may process data on the extent to which students' engagement with a resource leads to learning gains for the student population, to thereby infer the quality of a learning resource.
It will be realized that, given that there are often a very large number of learning resources available for any given educational course, it is highly time-consuming for instructors, e.g. lecturers and course facilitators, to manually allocate a quality rating to each resource. Nevertheless, it is important that the quality of a learning resource for a particular educational course can be assessed and accurately allocated; otherwise students may spend valuable time studying a learning resource which is of low quality and which should not have been approved for use. Furthermore, the students themselves may create some of the learning resources. In that case, it is very time-consuming for experts such as lecturers, or other qualified instructors, to check each student-authored learning resource, provide a quality rating in respect of it, and give constructive feedback to the student author.
In response to this problem researchers from a diverse range of fields (e.g., Learning at Scale (L@S), Artificial Intelligence in Education (AIED), Computer Supported Cooperative Work (CSCW), Human-Computer Interaction (HCI) and Educational Data Mining (EDM)) have explored the possibility of constructing processing systems that are specially configured to implement crowdsourcing approaches to support high-quality, learner-centred learning at scale. The use of processing systems that implement crowdsourcing in education, often referred to as learnersourcing, is defined as “a form of crowdsourcing in which learners collectively contribute novel content for future learners while engaging in a meaningful learning experience themselves” [16]. Recent progress in the field highlights the potential benefits of employing learnersourcing, and the rich data collected through it, towards addressing the challenges of delivering high quality learning at scale. In particular, with the increased enrolments in higher education, educational researchers and educators are beginning to use learnersourcing in novel ways to improve student learning and engagement [3,7,8,10,11,15,25-27].
However, the Inventors have found that processing systems configured to implement traditional reliability-based inference methods, which have been demonstrated to work effectively in other crowdsourcing systems, may not work well in education.
It would be desirable if a solution could be provided that is at least capable of receiving one or more indications of quality in respect of learning resources from respective devices of a plurality of non-experts via a data network and processing those indications of quality to set quality ratings in respect of the learning resources.
According to a first aspect there is provided a method to associate quality ratings with each digital resource of a plurality of digital resources, the method comprising, in respect of each of the digital resources:
In an embodiment the method includes operating the at least one processor to classify the digital resource as an approved resource based upon the quality rating.
In an embodiment the method includes operating the at least one processor to classify the digital resource as an approved resource or as a rejected resource based upon the quality rating.
In an embodiment the method includes operating the at least one processor to transmit a message to a device of an author of the rejected resource, the message including the quality rating and one or more of the one or more indications of quality received at (a).
In an embodiment the one or more indications of quality include decision ratings ($d_{ij}$) provided by the non-experts ($u_i$) in respect of the digital resource ($q_j$).
In an embodiment the one or more indications of quality include comments ($c_{ij}$) provided by the non-experts ($u_i$) in respect of the digital resource ($q_j$).
In an embodiment the method includes operating the at least one processor to process the comments in respect of the digital resource to quantify the comments as indicating a degree of positive or negative sentiment toward the digital resource.
In an embodiment operating the at least one processor to process the comments to quantify the comments as indicating a degree of positive or negative sentiment toward the digital resource includes operating the at least one processor to apply a sentiment lexicon to the comments to compute sentiment scores.
In an embodiment the method includes operating the at least one processor to calculate a reliability indicator in respect of each non-expert indicating reliability of the indications of quality provided by the non-expert.
In an embodiment, in (b), operating the at least one processor to process the one or more indications of quality from each of said respective non-expert devices to determine the draft quality rating and the level of confidence therefor includes:
In an embodiment the method includes operating the at least one processor to transmit the reliability indicators across the data network to respective non-expert devices of the non-experts for viewing by the non-experts.
In an embodiment the calculating of a reliability indicator for each non-expert comprises:
In an embodiment the heuristic procedure comprises:
where $f^R_{ij}$ is computed as the height of a Gaussian function at value $\mathrm{dif}_{ij}$ with centre $0$, the hyper-parameters $\sigma$ and $\delta$ of the Gaussian function being learned via cross-validation.
In an embodiment the heuristic procedure comprises:
where $F^L_{N\times M}$ is a function in which $f^L_{ij}$ is computed based on a logistic function whose hyper-parameters $c$, $a$ and $k$ are learned via cross-validation.
In an embodiment the heuristic procedure comprises:
where $f^A_{ij}$ approximates the alignment of the rating $d_{ij}$ and the comment $c_{ij}$ that a user $u_i$ has provided for a resource $q_j$.
In an embodiment the heuristic procedure includes determining the reliability indicators using a combination of two or more of the following three heuristic procedures:
where $f^R_{ij}$ is computed as the height of a Gaussian function at value $\mathrm{dif}_{ij}$ with centre $0$, the hyper-parameters $\sigma$ and $\delta$ of the Gaussian function being learned via cross-validation; and/or
where $F^L_{N\times M}$ is a function in which $f^L_{ij}$ is computed based on a logistic function whose hyper-parameters $c$, $a$ and $k$ are learned via cross-validation; and/or
where $f^A_{ij}$ approximates the alignment of the rating $d_{ij}$ and the comment $c_{ij}$ that a user $u_i$ has provided for a resource $q_j$.
In an embodiment the method includes establishing data communications with respective devices (“expert devices”) of a number of experts via the data network.
In an embodiment the method includes requesting an expert of the number of experts to review a digital resource.
In an embodiment the method includes receiving a quality rating (“expert quality rating”) from the expert via an expert device of the expert in respect of the digital resource.
In an embodiment the method includes operating the at least one processor to set a quality rating in respect of the digital resource to the expert quality rating.
In an embodiment the method includes transmitting feedback on the digital resource received from the expert across the data network, to an author of the digital resource.
In an embodiment the method includes transmitting a request to the expert device for the expert to check indications of quality received from the non-expert devices for respective digital resources.
In an embodiment the method includes operating the at least one processor to adjust reliability ratings of non-experts based on the check by the expert of the indications of quality received from the non-expert devices.
In an embodiment the non-experts comprise students.
In an embodiment the experts comprise instructors in an educational course.
In an embodiment the method includes providing the digital resources comprising learning resources to the students.
The digital resource may comprise a piece of assessment in the educational course.
The digital resource may comprise a manuscript for submission to a journal. The non-experts may comprise academic reviewers. The experts may comprise meta-reviewers or editors of the journal.
The digital resource may comprise software code such as source code or a script. The non-expert may comprise a junior engineer. The expert may comprise a senior engineer or team leader.
The digital resource may comprise an electronic document, for example a web page, made in a crowdsourcing environment such as Wikipedia. The non-expert may comprise a regular user. The expert may comprise moderators of groups of the crowdsourcing environment.
In an embodiment the method includes operating the at least one processor to process the digital resources to remove authorship data therefrom prior to providing them to the non-experts.
In another aspect there is provided a system for associating quality ratings with each digital resource of a plurality of digital resources, the system comprising:
In an embodiment the rating generator of the system is further configured to perform one or more of the embodiments of the previously described method.
In a further aspect there is provided a rating generator assembly for associating quality ratings with each digital resource of a plurality of digital resources, the rating generator assembly comprising:
In an embodiment the rating generator is further configured to perform one or more of the embodiments of the previously described method.
According to another aspect of the present invention there is provided a method to associate quality ratings with each digital resource of a plurality of digital resources, the method comprising receiving one or more indications of quality of the digital resource from respective devices (“non-expert devices”) of a plurality of non-experts via a data network, and setting the quality rating taking into account the received indications of quality.
Preferred features, embodiments and variations of the invention may be discerned from the following Detailed Description which provides sufficient information for those skilled in the art to perform the invention. The Detailed Description is not to be regarded as limiting the scope of the preceding Summary in any way. The Detailed Description mentions features that are preferable but which the skilled addressee will realize are not essential to all aspects and/or embodiments of the invention. The Detailed Description will refer to a number of drawings as follows:
The rating system 1 comprises a rating generator assembly 30 which is comprised of a server 33 (shown in detail in
Database 72 is arranged to store learning resources 5-1, . . . ,5-M ($Q_M=\{q_1,\ldots,q_M\}$) so that they can each be classed as non-moderated resources 72a, rejected resources 72b or approved resources 72c. Whilst database 72 is illustrated as a single database partitioned into areas 72a, 72b and 72c, it will be realized that many other functionally equivalent arrangements are possible. For example, the database areas 72a, 72b, 72c could be implemented as respective discrete databases in respective separate data storage assemblies which need not be implemented within storage of the rating generator assembly 30 but may instead be situated remotely and accessed by the rating generator assembly 30 across data network 31.
The data network 31 of rating system 1 may be the Internet or, alternatively, an internal data network, e.g. an intranet of a large organization such as a university. The data network 31 places non-expert raters in the form of students ($U_N=\{u_1,\ldots,u_N\}$) 3-1, . . . ,3-N, via their respective devices 3a, . . . ,3N (“non-expert devices”), in data communication with the rating generator assembly 30. Similarly, the data network 31 also places experts in the form of instructors 7-1, . . . ,7-L, via their respective devices 7a, . . . ,7L (“expert devices”), in data communication with the rating generator assembly 30.
As will be explained, during its operation the rating generator assembly performs a method to associate quality ratings with each digital resource. In the present example the digital resource is a learning resource of a plurality of learning resources in respect of a topic of an educational course.
Before describing the method further, an example of server 33 will be described with reference to
The main board 64 acts as an interface between CPUs 65 and secondary memory 75. The secondary memory 75 may comprise one or more optical, magnetic or solid-state drives. The secondary memory 75 stores instructions for an operating system 69. The main board 64 also communicates with random access memory (RAM) 80 and read only memory (ROM) 73. The ROM 73 typically stores instructions for a startup routine, such as a Basic Input Output System (BIOS) or Unified Extensible Firmware Interface (UEFI), which the CPUs 65 access upon start-up and which prepares the CPUs 65 for loading of the operating system 69.
The main board 64 also includes an integrated graphics adapter for driving display 77. The main board 64 accesses a communications port, for example communications adapter 53, such as a LAN adaptor (network interface card) or a modem that places the server 33 in data communication with data network 31.
An operator 67 of server 33 interfaces with server 33 using keyboard 79, mouse 51 and display 77 or alternatively, and more usually, via a remote terminal across data network 31.
After the BIOS or UEFI, and thence the operating system 69, have booted up the server, the operator 67 may operate the operating system 69 to load the rating program 70, thereby configuring server 33 to provide the rating generator assembly 30. The rating program 70 may be provided as tangible, non-transitory, machine-readable instructions 89 borne upon a computer-readable medium such as optical disk 87 for reading by disk drive 82. Alternatively, rating program 70 might be downloaded via port 53 from a remote data source such as a cloud-based data storage repository.
The secondary memory 75 is an electronic memory, typically implemented by a magnetic or non-volatile solid-state data drive, and stores the operating system 69. Microsoft Windows Server and Linux Ubuntu Server are two examples of such an operating system.
The secondary memory 75 also includes the rating program 70, being a server-side program according to a preferred embodiment of the present invention. The rating program 70 is comprised of machine-readable instructions for execution by the one or more CPUs 65. The secondary storage bears the machine-readable instructions. Rating program 70 may be programmed using one or more programming languages such as PHP, JavaScript, Java, and Python. The rating program 70 implements a data source in the form of the database 72 that is also stored in the secondary memory 75, or at another location accessible to the server 33, for example via the data network 31. The database 72 stores learning resources 5-1, . . . ,5-M so that they are identifiable as non-moderated resources 72a, rejected resources 72b and approved resources 72c. As previously alluded to, in other embodiments separate databases may be used to respectively store one or more of the non-moderated, rejected and approved resources.
During an initial phase of operation of the server 33 the one or more CPUs 65 load the operating system 69 and then load the rating program 70 to thereby provide, by means of the server 33 in combination with the rating program 70, the rating generator assembly 30.
In use, the server 33 is operated by the administrator 67 who is able to monitor activity logs and perform various housekeeping functions from time to time in order to keep the server 33 operating optimally.
It will be realized that server 33 is simply one example of an environment for executing rating program 70. Other suitable environments are also possible, for example the rating generator assembly 30 may be implemented by a virtual machine in a cloud computing environment in combination with the rating program 70. Dedicated machines which do not comprise specially programmed general-purpose hardware platforms, but which instead include a plurality of dedicated circuit modules to implement the various functionalities of the method are also possible.
Methods that are implemented by the rating generator assembly 30 to process the student decision ratings and comments in respect of the learning resources will be described in the following sections of this specification. These methods are coded as machine-readable instructions which comprise the rating program 70 and which are executed by the CPUs 65 of the server 33.
Table 1 provides a summary of the notation used to describe various procedures of a method according to an embodiment of the invention that is coded into the rating program 70 of the rating generator assembly 30 in the presently described example.
With reference to
In a first embodiment the rating generator assembly 30 is configured to perform a method to associate quality ratings with each digital resource, wherein the digital resource may be a learning resource of a plurality of learning resources, e.g. resources 5-1, . . . ,5-M ($Q_M=\{q_1,\ldots,q_M\}$), in respect of a topic of an educational course. The method comprises, in respect of each of the learning resources, receiving one or more indications of quality, for example in the form of decision ratings $d_{ij}$ and comments $c_{ij}$, in respect of the learning resource $q_j$ from respective devices (“non-expert devices”, e.g. 3a, . . . ,3N) of a plurality of non-experts, for example students ($U_N=\{u_1,\ldots,u_N\}$) 3-1, . . . ,3-N, via a data network 31. The method involves operating at least one processor, e.g. CPU(s) 65 of rating generator assembly 30, to process the one or more indications of quality from each of the respective non-expert devices 3a, . . . ,3N to determine a draft quality rating $\hat{r}_j$ and an associated level of confidence or “confidence value” for that draft quality rating. The method includes repeatedly receiving indications of quality from further of the non-expert devices and updating the draft quality rating and its associated level of confidence until the associated level of confidence meets a required confidence level. Once the required confidence level has been met, the rating generator assembly sets the quality rating to the draft quality rating whose associated level of confidence meets the required confidence level. The method of this first embodiment is reflected in boxes 102 to 113 of the flowchart of the preferred embodiment.
In the preferred embodiment of the invention that will be described with reference to the flowchart of
Prior to discussing the preferred embodiment with reference to the entire flowchart of
Mean. A simple solution is to use mean aggregation, where $\hat{r}_j = \frac{1}{k}\sum_{i=1}^{k} d_{ij}$, with $k$ denoting the number of users who have rated resource $q_j$.
There are two main drawbacks to using mean aggregation: (1) it is strongly affected by outliers and (2) it assumes that the contribution of each student has the same quality, whereas in reality, students' academic ability and reliability may vary quite significantly across a cohort.
Median. An alternative simple solution is to use $\hat{r}_j=\operatorname{Median}(d_{1j},\ldots,d_{kj})$. A benefit of using the median is that it is not strongly affected by outliers; however, similar to mean aggregation, it assumes that the contribution of each student has the same quality, which is a strong and inaccurate assumption.
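By way of illustration only, the two baseline aggregators may be sketched as follows in Python (a minimal sketch; the function names are illustrative and do not form part of the claimed method):

```python
from statistics import mean, median

def aggregate_mean(decision_ratings: list[float]) -> float:
    """Draft quality rating r_hat_j as the mean of the k decision ratings d_1j..d_kj."""
    return mean(decision_ratings)

def aggregate_median(decision_ratings: list[float]) -> float:
    """Draft quality rating r_hat_j as the median of the k decision ratings;
    robust to outliers, but still treats every student as equally reliable."""
    return median(decision_ratings)

# Example: five students rate resource q_j on a 1-5 scale.
ratings = [4, 5, 4, 1, 4]
print(aggregate_mean(ratings))    # 3.6  (dragged down by the outlier 1)
print(aggregate_median(ratings))  # 4    (unaffected by the outlier)
```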
User Bias. Some students may consistently underestimate (or overestimate) the quality of resources, and it is desirable to address that. We introduce the notation $B_N$, where $b_i$ denotes the bias of user $u_i$ in rating. Introducing a bias parameter has been demonstrated to be an effective way of handling user bias in different domains such as recommender systems and crowd consensus approaches [17]. We first compute $\bar{d}$ as the average decision rating across all users. The bias term for user $u_i$ is then computed as $b_i=\bar{d}_i-\bar{d}$, where $\bar{d}_i$ is the average decision rating given by user $u_i$.
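The bias correction may be sketched as follows, assuming (as the passage above suggests, though the exact expression is not reproduced in this document) that a user's bias is the gap between that user's average rating and the global average:

```python
def user_bias(user_ratings: list[float], all_ratings: list[float]) -> float:
    """b_i = (average decision rating given by user u_i) - (global average d_bar).

    ASSUMPTION: this difference-of-means form is inferred from the surrounding
    text; the patent does not reproduce the exact expression."""
    return sum(user_ratings) / len(user_ratings) - sum(all_ratings) / len(all_ratings)

# A consistently harsh marker rates about one point below the cohort average.
cohort_ratings = [3, 4, 4, 5, 2, 3, 4]
harsh_marker = [2, 3, 3]
b_i = user_bias(harsh_marker, cohort_ratings)   # negative bias
debiased = [d - b_i for d in harsh_marker]      # bias-corrected ratings d_ij - b_i
```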
Students within a cohort can have a large range of academic abilities. The one-dimensional array $W_N$ is used, where $w_i$ infers the reliability of a user, so that more reliable students have a larger contribution (i.e. “weight”) towards the computation of the final decision. Many methods have been introduced in the literature for computing the reliability of users [30]. The problems of inferring the reliability of users $W_N$ and the quality of resources $R_M$ can be seen as a “chicken-and-egg” problem, where inferring one set of parameters depends on the other. If the true reliability of students $W_N$ were known, then an optimal weighting of their decisions could be used to estimate $R_M$. Similarly, if the true quality of resources $R_M$ were known, then the reliability of each student $W_N$ could be estimated. In the absence of ground truth for either, the Inventors have conceived of three heuristic methods (which make use of equations (1) to (3) in the following) that may be employed in some embodiments, whereby students can view updates to their reliability score. In each of the heuristic methods:
The manner of computing $\hat{r}_j$ and updating $w_1,\ldots,w_k$ in each of the three heuristic methods will now be discussed.
Rating. In this method, the current ratings of the users and their given decisions are utilised for computing the quality of the resources and reliabilities. In this method, $\hat{r}_j$ and $w_i$ are computed using Formula 1 as follows:
where $F^R_{N\times M}$ is a function in which $f^R_{ij}$ determines the ‘goodness’ of $d_{ij}$ based on $\hat{r}_j$ using the distance between the two, $\mathrm{dif}_{ij}=|d_{ij}-\hat{r}_j|$. Formally, $f^R_{ij}$ is computed as the height of a Gaussian function at value $\mathrm{dif}_{ij}$ with centre $0$, where the hyper-parameters $\sigma$ and $\delta$ can be learned via cross-validation. Informally, $f^R_{ij}$ provides a large positive value (reward) in cases where $\mathrm{dif}_{ij}$ is small and a large negative value (punishment) in cases where $\mathrm{dif}_{ij}$ is large.
Length of Comment. The reliability of a user decision in the previous scenario relies on the numeric ratings provided for a resource and does not take into account how much effort was applied by a user in the evaluation of a resource. In this method, the current ratings, as well as the decisions and comments of users, are utilised for computing the quality of the resources and updating reliabilities. The notation $LC_{N\times M}$ is used, where $lc_{ij}$ denotes the length of the comment (i.e., number of words) provided by user $u_i$ on resource $q_j$. $\hat{r}_j$ and $w_i$ are computed using Formula 2 as follows:
where $F^L_{N\times M}$ is a function in which $f^L_{ij}$ approximates the ‘effort’ of $u_i$ in answering $q_j$ based on the length of the comment $lc_{ij}$. Formally, $f^L_{ij}$ is computed based on a logistic function whose hyper-parameters $c$, $a$ and $k$ can be learned via cross-validation. Informally, $f^L_{ij}$ rewards students that have provided a longer explanation for their rating and punishes students that have provided a shorter explanation for their rating.
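The comment-length (‘effort’) reward may be sketched with the common three-parameter logistic $c/(1 + a\,e^{-k\,lc})$; the exact function is not reproduced in this document, so this parameterisation and the default values are assumptions:

```python
import math

def effort(comment: str, c: float = 1.0, a: float = 10.0, k: float = 0.2) -> float:
    """f_ij^L: logistic reward on comment length lc_ij (number of words).

    ASSUMED form c / (1 + a*exp(-k*lc)): the reward grows with comment length but
    saturates at c, so very long comments stop earning extra credit. c, a and k
    would be learned via cross-validation."""
    lc = len(comment.split())
    return c / (1.0 + a * math.exp(-k * lc))

print(effort("ok"))  # ~0.11: a terse comment earns little credit
print(effort("The distractors are implausible because option B repeats the stem "
             "and the explanation does not justify the marked answer at all."))  # ~0.87
```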
Rating-Comment Alignment. The previous two reliability-based models take into account the similarity of the students' numeric rating with their peers and the amount of effort they have spent on moderation as measured by the length of their comments. Here, the alignment between the ratings and comments provided by a user is considered. In this method, $\hat{r}_j$ and $w_i$ are computed using Formula 3 as follows:
where $F^A_{N\times M}$ is a function in which $f^A_{ij}$ approximates the alignment of the rating $d_{ij}$ and the comment $c_{ij}$ that a user $u_i$ has provided for a resource $q_j$. A sentiment analysis tool that assesses the linguistic features in the comments provided by the students on each resource is used to classify the words, in terms of emotion, as positive, negative or neutral. The Jockers-Rinker sentiment lexicon provided in the SentimentR package is applied here to compute a sentiment score between −1 and 1, in increments of 0.1, which indicates the degree of sentiment present in the comments. This package assigns polarity to words in strings while accounting for valence shifters [21,18]. For example, it recognizes the sample comment “This question is Not useful for this course” as negative, rather than flagging the word “useful” as positive.
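The alignment computation may be sketched as follows. The patent applies the Jockers-Rinker lexicon via the SentimentR (R) package; the toy scorer below is a hypothetical stand-in for any lexicon-based tool returning a score in [−1, 1], included only so the sketch is self-contained, and the form of `alignment` is likewise an assumption:

```python
POSITIVE = {"useful", "clear", "good", "excellent"}
NEGATIVE = {"confusing", "wrong", "poor", "irrelevant"}
NEGATORS = {"not", "never", "no"}

def sentiment_score(comment: str) -> float:
    """Toy stand-in for a lexicon scorer with valence shifters: a negator directly
    before a polarised word flips its valence ('Not useful' -> negative)."""
    words = comment.lower().split()
    total, hits = 0.0, 0
    for idx, word in enumerate(words):
        polarity = 1.0 if word in POSITIVE else -1.0 if word in NEGATIVE else 0.0
        if polarity:
            if idx > 0 and words[idx - 1] in NEGATORS:
                polarity = -polarity  # valence shifter
            total += polarity
            hits += 1
    return total / hits if hits else 0.0

def alignment(d_ij: float, comment: str, d_min: float = 1.0, d_max: float = 5.0) -> float:
    """f_ij^A (ASSUMED form): map the numeric rating onto [-1, 1] and compare it
    with the comment sentiment; agreement -> +1, full conflict -> -1."""
    rating_polarity = 2.0 * (d_ij - d_min) / (d_max - d_min) - 1.0
    return 1.0 - abs(rating_polarity - sentiment_score(comment))

print(sentiment_score("This question is not useful for this course"))  # -1.0
print(alignment(5.0, "This question is not useful for this course"))   # -1.0: conflict
```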
Combining Reliability Functions. Any combination of the three presented reliability functions can also be considered. For example, Formula 4 uses all three of the rating, length-of-comment and rating-comment-alignment methods for reliability.
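The combination can then be as simple as summing the three rewards (optionally with mixing weights) when updating a moderator's reliability. A sketch reusing the functions above; the additive form and the mixing weights are assumptions, since Formula 4 is not reproduced in this document:

```python
def combined_update(w_i: float, f_r: float, f_l: float, f_a: float,
                    betas: tuple[float, float, float] = (1.0, 1.0, 1.0)) -> float:
    """ASSUMED Formula-4-style update: reliability moves by a weighted sum of the
    rating (f_r), length-of-comment (f_l) and alignment (f_a) rewards."""
    b_r, b_l, b_a = betas
    return w_i + b_r * f_r + b_l * f_l + b_a * f_a
```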
Referring now to
Prior to performing the method the rating generator assembly 30 establishes data communication with each of the students 3-1, . . . ,3-N and Instructors, 7-1, . . . ,7L via data network 31 for example by serving webpages composed of e.g. HTML, CSS and JavaScript to their devices 3a, . . . ,3N and 7a, . . . ,7L with http or https protocols for rendering on suitable web-browsers running on each of the devices (as depicted in
At box 100 rating generator assembly 30 receives a learning resource, e.g. learning resource $q_k$, via network 31. The learning resource $q_k$ may have been generated by one of the students ($U_N=\{u_1,\ldots,u_N\}$) 3-1, . . . ,3-N or by one of the instructors 7-1, . . . ,7-L.
At decision box 101, if rating generator assembly 30 determines (for example from meta-data associated with the learning resource, such as the sender's identity and position in the educational facility) that $q_k$ was sent by one of the students, then at box 102 the rating generator assembly 30 stores the learning resource $q_k$ in the non-moderated resources area 72a of database 72. Alternatively, if at decision box 101 rating generator assembly 30 determines that $q_k$ was produced by one of the instructors 7-1, . . . ,7-L, then at box 125 the learning resource $q_k$ is stored directly in the approved resources area 72c of database 72.
At decision box 103 the rating generator assembly 30 may take either of two paths. It may proceed along a first path to box 105, where a student-moderated procedure commences, or along a second path to box 127, where one or more of the instructors 7-1, . . . ,7-L engage with the rating generator assembly to assist with ensuring that the learning resource quality ratings and student reliability ratings are being properly allocated. At box 103 the server checks the role of a user requesting to moderate, i.e. to provide one or more indications of quality, such as a decision rating and/or a comment in respect of a learning resource, to determine whether they are an instructor or a student.
At box 105, where the user requesting to moderate (i.e. available to moderate) is a student, the rating generator assembly 30 selects a non-moderated resource $q_j$ from the non-moderated resources area 72a of the database 72. The rating generator assembly 30 transmits the non-moderated resource $q_j$ to one or more of the available students $u_i$ via the data network 31 with a request for the students to evaluate the resource $q_j$. It is highly preferable that the rating generator assembly 30 is configured to provide the resource to the student without any identification of the author of the document. This is so that the student moderation, i.e. the allocation of a rating to the document by the student, is performed blindly, i.e. without any possibility of the student being influenced by prior knowledge of the author.
At box 109 the rating generator assembly 30 computes a draft quality rating $\hat{r}_j$ in respect of the learning resource $q_j$, based on the received decision rating $d_{ij}$ and comment $c_{ij}$, and an associated confidence value for the draft quality rating $\hat{r}_j$.
At box 111, if the confidence value is below a threshold value, then control diverts back to box 102 and the procedure through boxes 105 to 109 repeats until a draft quality rating $\hat{r}_j$ is determined for the non-moderated learning resource $q_j$ with a confidence value meeting the required confidence level. In that case, at box 111 control proceeds to box 113 and the quality rating is set to the value of the final draft quality rating. The associated confidence value is calculated as follows: if $n$ moderators have reviewed a resource, the rating generator assembly 30 calculates the confidence value as an aggregated sum, i.e. $\text{confidence value} = w_1 \cdot sc_1 + w_2 \cdot sc_2 + \cdots + w_n \cdot sc_n$, where $sc_i$ is the self-confidence rating of moderator $i$, and compares that aggregated sum to a threshold value.
The confidence value increases as more non-expert moderators provide a quality rating for the digital resource being rated.
In terms of typical numbers, reliability values for non-expert moderators satisfy $700 < w_i < 1300$ and self-confidence ratings satisfy $0 < sc_i < 1$. Two methods that may be used in relation to the confidence value and the threshold value are:
2. Instructors can set the minimum and maximum number of moderations required for a resource (default values of min = 3 and max = 5 have been found to be workable). $k$ is then set to $k = (\min + \max)/2$ in the formula given in method 1. However, an additional constraint is also placed on the lower and upper bounds of the number of moderators when a decision is made. This second method has been found to provide a better estimate of how many moderations are needed to get $n$ resources reviewed.
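The stopping rule applied at box 111 may be sketched as follows, using the aggregated sum described above together with the bounds of method 2 (the threshold value is illustrative, and the formula of method 1 is not reproduced here):

```python
def confidence(reliabilities: list[float], self_confidences: list[float]) -> float:
    """Confidence value = w_1*sc_1 + w_2*sc_2 + ... + w_n*sc_n."""
    return sum(w * sc for w, sc in zip(reliabilities, self_confidences))

def decision_ready(reliabilities: list[float], self_confidences: list[float],
                   threshold: float, min_mods: int = 3, max_mods: int = 5) -> bool:
    """Method 2: accept the draft rating once the aggregated confidence passes
    the threshold, subject to lower/upper bounds on the number of moderations."""
    n = len(reliabilities)
    if n < min_mods:
        return False   # never decide on fewer than min_mods moderations
    if n >= max_mods:
        return True    # cap the moderation cost per resource
    return confidence(reliabilities, self_confidences) >= threshold

# Three moderators with typical reliabilities (700 < w_i < 1300) and
# self-confidence ratings (0 < sc_i < 1):
w = [900.0, 1100.0, 1000.0]
sc = [0.8, 0.6, 0.9]
print(confidence(w, sc))                        # 2280.0
print(decision_ready(w, sc, threshold=2000.0))  # True
```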
If the confidence value associated with the draft quality rating computed at box 109 exceeds the threshold, then control proceeds to box 113. Otherwise, control loops back to box 102 to obtain further moderations, i.e. from further non-expert moderators (students) in respect of the same digital resource, until the confidence value computed at box 109 exceeds the threshold. The self-confidence values are directly input by the non-expert moderators into their devices 3, for example by means of data entry input field 204.
At box 113 the rating generator assembly 30 also updates the reliability ratings $w_1,\ldots,w_n$ of the students involved in arriving at the final quality rating $\hat{r}_j$ for the learning resource $q_j$. For example, at box 113 the rating generator assembly 30 may determine the reliability ratings $w_i$ of the students $u_i$ according to one or more of formulae (1) to (4) that have been previously discussed.
At box 115 the rating generator assembly 30 transmits the rating $\hat{r}_j$ that it has allocated to the resource $q_j$, and any changes to the reliability ratings of the students involved, back to the devices 3a, . . . ,3N of the students, said students being an example of non-expert moderators. In a further step, subsequent to box 115, the moderators may be asked to look at the reviews from the other moderators and indicate whether or not they agree with the decision that has been made. If they do not agree with the decision, the disagreement is used to increase the priority of the resource for spot-checking by experts.
Rating generator assembly 30 is preferably configured to implement an explainable rating system to simultaneously infer the reliability of student moderators and the quality of the resources. In one embodiment the method includes calculating values for the reliability and quality ratings in accordance with formulas (1) to (4) as previously discussed. The reliability of each student moderator may initially be set to a value α. The quality of a resource is then calculated as a weighted average of the decision ratings provided by the student moderators, weighted by their reliability ratings. Preferably the calculation affords a greater weight to indications of quality from non-experts with a higher reliability indicator and a lower weight to indications of quality from non-experts with a lower reliability indicator. A minimal sketch of this simultaneous inference is given below.
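The sketch below illustrates the simultaneous (“chicken-and-egg”) inference as a batch fixed-point iteration, reusing the assumed `goodness()` reward from the earlier sketch; a deployed rating program 70 would instead update incrementally as moderations arrive:

```python
def infer(ratings: dict[tuple[int, int], float], n_users: int, n_resources: int,
          alpha: float = 1.0, n_iters: int = 10) -> tuple[list[float], list[float]]:
    """Alternately estimate resource quality r_hat_j and user reliability w_i.

    ratings maps (i, j) -> d_ij; every resource is assumed to have at least one
    rating. All reliabilities start at alpha, as described above."""
    w = [alpha] * n_users
    r_hat = [0.0] * n_resources
    for _ in range(n_iters):
        # Quality pass: each resource's rating is a reliability-weighted average.
        for j in range(n_resources):
            raters = [(i, d) for (i, jj), d in ratings.items() if jj == j]
            r_hat[j] = (sum(w[i] * d for i, d in raters) /
                        sum(w[i] for i, _ in raters))
        # Reliability pass: reward agreement with the current consensus
        # (the positive floor is an illustrative implementation choice).
        for i in range(n_users):
            given = [(jj, d) for (ii, jj), d in ratings.items() if ii == i]
            w[i] = max(0.1, alpha + sum(goodness(d, r_hat[jj]) for jj, d in given))
    return r_hat, w

# Three students rate two resources; student 2 disagrees with the consensus and
# ends up with a lower reliability, shrinking their influence on r_hat.
ratings = {(0, 0): 4.0, (1, 0): 4.5, (2, 0): 1.0,
           (0, 1): 3.0, (1, 1): 3.5, (2, 1): 5.0}
r_hat, w = infer(ratings, n_users=3, n_resources=2)
```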
Learning resources that are perceived as effective may be classified as such, for example by adding them to the repository of approved resources, e.g. area 72c of database 72. For example, a learning resource may be deemed to be “effective” taking into account its alignment with the course content, the correctness and clarity of the resource, the appropriateness of its difficulty level for the course it is being used in, and whether or not it promotes critical thinking. The ratings of the student moderators may then be updated based on the “goodness” of their decision ratings, as previously discussed. Feedback about the moderation process may then be transmitted, via the data network, to the author of the learning resource and to the moderators.
At decision box 117, if the quality rating that was determined at box 109 with an above-threshold confidence value is below the level indicating the resource $q_j$ to be an approved resource, then the rating generator assembly 30 proceeds to box 119 and moves the resource $q_j$ from the non-moderated resources class 72a to the rejected resources class 72b in database 72. Subsequently, at box 121 the rating generator assembly 30 sends a message to the student that created the resource encouraging them to revise and resubmit the learning resource based on the feedback that has been transmitted to them, e.g. the comments that the resource received from students at box 107.
Alternatively, if at decision box 117 a decision is made to approve the learning resource $q_j$, then control proceeds to box 123. At box 123 the rating generator assembly 30 sends the student that authored the resource a message encouraging the student to update the resource based on feedback, e.g. the comments that the resource received from students at box 107. At box 125, the rating generator assembly 30 then moves the resource $q_j$ from the non-moderated resources class 72a to the approved resources class 72c of database 72.
At box 137 the rating generator assembly 30 determines the role of the user, e.g. “student” or “instructor”. For students the purpose of their engagement with approved resources may be to obtain an adaptive recommendation. For instructors it may be to check how they can best utilize their time with spot-checking.
At box 139 the rating generator assembly 30 serves a webpage to students, e.g. webpage 209 on device 3i as shown in
Returning to decision box 103: if at decision box 103 the rating generator assembly 30 finds that one of the instructors 7-1, . . . ,7-L, e.g. instructor 7-i, is available, then at box 127 the rating generator assembly 30 identifies a “best” activity, such as a high-priority activity, for the instructor 7-i to perform.
At decision box 129, if the best activity that was identified at box 127 is to spot-check the learning resources $q_1,\ldots,q_M$, for example to ensure that an approved resource should indeed have been approved, or that a rejected resource should indeed have been rejected, then the procedure progresses to box 131. At box 131 the rating generator assembly 30 provides a resource $q_s$ to the instructor 7-i for the instructor to spot-check.
The instructor 7-i returns a comment $c_{i,s}$ and a decision rating $d_{i,s}$ in respect of the resource $q_s$, which the rating generator assembly 30 then uses at boxes 113 and 115 to form an expert quality rating, to update the quality rating of $q_s$, and to update the reliability ratings of one or more of the students involved in authoring and/or in the prior quality rating of the resource $q_s$. Based on the spot-checking at box 131, the rating generator assembly 30 may detect students that have made poor learning resource contributions or are misbehaving in the system. In that case, the rating generator assembly 30 serves a webpage that is rendered as screen 213 on the administrator device, i.e. display 77.
If at decision box 129 the best activity that was identified at box 127 is to check the quality of a learning resource contributed by a student $u_i$, then at box 133 the rating generator assembly 30 provides a resource $q_p$ to an available instructor, e.g. instructor 7-L. The instructor 7-L then reviews the learning resource $q_p$ and sends a decision rating $d_p$ and comment $c_{L,p}$ back to the rating generator assembly 30. The rating generator assembly 30 then updates the reliability rating $w_i$ of student $u_i$ based on the comment $c_{L,p}$ and decision rating $d_p$ in respect of the learning resource $q_p$ that was created by student $u_i$, and provides feedback to the student $u_i$ advising of the new quality rating, reliability rating and the instructor's comment. The feedback assists student $u_i$ to improve the initial quality of learning resources that the student generates in the future.
At box 135 the rating generator assembly 30 updates the reliability rating of student $u_i$ and transmits feedback to them based on the outcome of the review, if needed.
At any time the administrator 67 can request information from the rating generator assembly regarding quality rating and reliability ratings, for example as shown in screen 214 of administrator device 77 in
It will be realised that the exemplary embodiment that has been described is only one example of an implementation. For example, in other embodiments fewer features may be present, as previously discussed in relation to the first embodiment, or more features may be present. For example, embodiments of the method may assess quality and reliability of the moderators by configuring the rating generator assembly 30 to take into account factors including one or more of the following:
The disclosures of each of the following documents are hereby incorporated herein by reference.
ICEL 2018: 13th International Conference on e-Learning, p. 184. Academic Conferences and Publishing Limited (2018).
In compliance with the statute, the invention has been described in language more or less specific to structural or methodical features. The term “comprises” and its variations, such as “comprising” and “comprised of”, are used throughout in an inclusive sense and not to the exclusion of any additional features. It is to be understood that the invention is not limited to the specific features shown or described, since the means herein described comprise preferred forms of putting the invention into effect. The invention is, therefore, claimed in any of its forms or modifications within the proper scope of the appended claims, appropriately interpreted by those skilled in the art.
Throughout the specification and claims (if present), unless the context requires otherwise, the terms “substantially” and “about” will be understood to mean that the value or range they qualify is not limited to that exact value or range.
Any embodiment of the invention is meant to be illustrative only and is not meant to be limiting to the invention. Therefore, it should be appreciated that various other changes and modifications can be made to any embodiment described without departing from the scope of the invention.
Number | Date | Country | Kind
--- | --- | --- | ---
2020903176 | Sep. 2020 | AU | national

Filing Document | Filing Date | Country | Kind
--- | --- | --- | ---
PCT/AU2021/051025 | 9/3/2021 | WO |