This disclosure relates to the field of systems and methods configured to allow learner analytics to be efficiently tracked even when a course hierarchy and/or structure are changed after the course has started.
The present invention provides systems and methods comprising one or more server hardware computing devices or client hardware computing devices, communicatively coupled to a network, and each comprising at least one processor executing specific computer-executable instructions within a memory.
An embodiment of the present invention allows analytics on measurements to work with an original table of contents (TOC) and an updated TOC. An electronic education platform may generate the original TOC for a course. The original TOC may comprise a first original assignment and a second original assignment. The first original assignment may comprise a first plurality of learning resources and the second original assignment may comprise a second plurality of learning resources.
A learner engagement engine may measure a plurality of student engagement activities, such as reading time, for the first plurality of learning resources and for the second plurality of learning resources. The learner engagement engine may aggregate the measurements in the plurality of student engagement activities that are in the first original assignment to determine a total amount of time spent on the first original assignment. As an example, a particular student's reading times may be 1.1, 1.2, and 1.3 hours for chapters 1, 2, and 3, respectively, which together form the first original assignment, and 1.4 hours for chapter 4, which forms the second original assignment. The aggregated measurements may be graphically displayed to the teacher. The teacher may, at any desired time, update the TOC. As an example, the teacher may wish to move reading chapter 3 from the first original assignment to the second original assignment, thereby creating a first updated assignment of reading chapters 1 and 2 and a second updated assignment of reading chapters 3 and 4. Of course, other types of changes may be made to the assignments, such as adding and deleting other learning resources from the assignments.
The learner engagement engine may aggregate the measurements in the plurality of student engagement activities that are in the first updated assignment to determine a total amount of time spent on the first updated assignment. The learner engagement engine may also aggregate the measurements in the plurality of student engagement activities that are in the second updated assignment to determine a total amount of time spent on the second updated assignment. It should be noted that measurements of student activities measured while the original TOC was active may be used to calculate various desired analytics for the updated TOC. The learner engagement engine may graphically display the total amount of time spent on the first updated assignment and the total amount of time spent on the second updated assignment, even though the measurements were taken before the TOC was updated. The above features and advantages of the present invention will be better understood from the following detailed description taken in conjunction with the accompanying drawings.
The present inventions will now be discussed in detail with regard to the attached drawing figures that were briefly described above. In the following description, numerous specific details are set forth illustrating the Applicant's best mode for practicing the invention and enabling one of ordinary skill in the art to make and use the invention. It will be obvious, however, to one skilled in the art that the present invention may be practiced without many of these specific details. In other instances, well-known machines, structures, and method steps have not been described in particular detail in order to avoid unnecessarily obscuring the present invention. Unless otherwise indicated, like parts and method steps are referred to with like reference numerals.
Research with instructors has suggested that students are challenged not only by the academic nature of learning (understanding knowledge) but, often more importantly, by how to be most productive through developing consistent learning behaviors. Instruction and data associated with such learning behaviors, provided directly to the instructor, may empower instructors to make a difference through discussion and intervention regarding students' learning behaviors.
The disclosed embodiments include a learner engagement engine which creates a platform for the manner in which activity is tracked across dynamic content. For example, the disclosed embodiments may create a platform, in near real time, for learning resources, structures, and/or use cases that are changed by instructors, so that learner activity analytics can be leveraged in the correct context within the learning experience. This approach may represent an improvement over current approaches, which focus on dedicated solutions (e.g., data, code, APIs) per product model. Instead, the disclosed embodiments offer a micro-services approach in which activity analytics are available across product models and across contexts (such as book structure and assignment structure) at the same time.
In the disclosed embodiments, when an instantiation of a product is first created, the product system will seed the initial structure of the content, including any areas where the same content is mapped to multiple structures, as described in more detail below (e.g., Chapter Section 1.1 is mapped to: the Book, the Chapter, a given assignment, and one-to-many learning objectives). The relationship between the structures and the objects is unique per instantiation and will dictate the aggregations during runtime use cases. As the student interacts with a given learning resource (chapter, page, video, question, interactive, etc.), their activity is tracked individually on the given object, both at a point in time and temporally (view, load, unload, time spent, session). When an associated product (e.g., software calling from an API) makes a call to display various activity analytics in the user experience, the current state of the hierarchy and the relationships of the learning resources in the hierarchy dictate how a value for a given metric is calculated. As a result, when an instructor changes an assignment structure after there has already been activity by the student, or a curriculum designer changes a learning objective map after there has already been activity, the new structures will calculate activity analytics based on the new context.
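As a rough TypeScript sketch of this read-time aggregation idea (the type and function names here are illustrative assumptions, not the platform's actual schema), activity is stored once per atomic object, and the current set of hierarchy edges dictates the roll-up:

```typescript
// Illustrative sketch: activity is recorded against atomic objects only,
// while hierarchy edges are consulted at read time to compute aggregates.
interface ActivityRecord {
  objectId: string;     // e.g., "section-1.1"
  timeSpentSec: number;
}

interface HierarchyEdge {
  parentId: string;     // e.g., "assignment-1" or "chapter-1"
  childId: string;
}

function totalTimeFor(
  nodeId: string,
  edges: HierarchyEdge[],
  activity: ActivityRecord[],
): number {
  // Time recorded directly on this object.
  let total = activity
    .filter(rec => rec.objectId === nodeId)
    .reduce((sum, rec) => sum + rec.timeSpentSec, 0);
  // Recurse so nested structures (book -> chapter -> section) roll up too.
  for (const edge of edges.filter(e => e.parentId === nodeId)) {
    total += totalTimeFor(edge.childId, edges, activity);
  }
  return total;
}
```

Because only the edges are consulted at read time, restructuring an assignment or learning objective map changes future aggregations without rewriting any stored activity records.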
The disclosed embodiments may be differentiated from existing technology by their ability to re-use the same data aggregations, system, and APIs to support activity analytics where content is structured in multiple different hierarchies at the same time (e.g., reporting on activity analytics in book structure, assignment structure, and learning objective structure from a single stream of student activity data).
In summary, the disclosed system was developed in part to address the continuous need for tracking activity analytics across a corpus of content (regardless of product) and therefore was architected in a manner that treats all content as objects that can fit into one-to-many hierarchical structures. This generic approach to content as objects means that the approach can be used for any digital content-based product. For example, using the disclosed embodiments, instructors may view how many students in a class have done less than 35% of the questions that have been assigned within the last 5 assignments, allowing the instructor to tailor their intervention toward getting those students to complete their homework. In another example, the instructor may see the number of pages a student has viewed out of the total pages of content that have been assigned, thereby allowing the instructor to make suggestions around material usage when intervening with any given student. In a third example, an instructor may view at a glance which assignable units (assessments) in an assignment are flagged with low activity and can quickly identify the specific students that have not done the work in order to improve those students' performance.
Server 102, client 106, and any other disclosed devices may be communicatively coupled via one or more communication networks 120. Communication network 120 may be any type of network known in the art supporting data communications. As non-limiting examples, network 120 may be a local area network (LAN; e.g., Ethernet, Token-Ring, etc.), a wide-area network (e.g., the Internet), an infrared or wireless network, a public switched telephone network (PSTN), a virtual network, etc. Network 120 may use any available protocols, such as transmission control protocol/Internet protocol (TCP/IP), systems network architecture (SNA), Internet packet exchange (IPX), Secure Sockets Layer (SSL), Transport Layer Security (TLS), Hypertext Transfer Protocol (HTTP), Secure Hypertext Transfer Protocol (HTTPS), the Institute of Electrical and Electronics Engineers (IEEE) 802.11 protocol suite or other wireless protocols, and the like.
The embodiments shown in
As shown in
In some embodiments, the security and integration components 108 may implement one or more web services (e.g., cross-domain and/or cross-platform web services) within the content distribution network 100, and may be developed for enterprise use in accordance with various web service standards (e.g., the Web Service Interoperability (WS-I) guidelines). For example, some web services may provide secure connections, authentication, and/or confidentiality throughout the network using technologies such as SSL, TLS, HTTP, HTTPS, WS-Security standard (providing secure SOAP messages using XML encryption), etc. In other examples, the security and integration components 108 may include specialized hardware, network appliances, and the like (e.g., hardware-accelerated SSL and HTTPS), possibly installed and configured between servers 102 and other network components, for providing secure web services, thereby allowing any external devices to communicate directly with the specialized hardware, network appliances, etc.
Computing environment 100 also may include one or more data stores 110, possibly including and/or residing on one or more back-end servers 112, operating in one or more data centers in one or more physical locations, and communicating with one or more other devices within one or more networks 120. In some cases, one or more data stores 110 may reside on a non-transitory storage medium within the server 102. In certain embodiments, data stores 110 and back-end servers 112 may reside in a storage-area network (SAN). Access to the data stores may be limited or denied based on the processes, user credentials, and/or devices attempting to interact with the data store.
With reference now to
One or more processing units 204 may be implemented as one or more integrated circuits (e.g., a conventional microprocessor or microcontroller), and control the operation of computer system 200. These processors may include single core and/or multicore (e.g., quad core, hexa-core, octo-core, ten-core, etc.) processors and processor caches. These processors 204 may execute a variety of resident software processes embodied in program code, and may maintain multiple concurrently executing programs or processes. Processor(s) 204 may also include one or more specialized processors (e.g., digital signal processors (DSPs), outboard processors, graphics processors, application-specific processors, and/or other processors).
Bus subsystem 202 provides a mechanism for intended communication between the various components and subsystems of computer system 200. Although bus subsystem 202 is shown schematically as a single bus, alternative embodiments of the bus subsystem may utilize multiple buses. Bus subsystem 202 may include a memory bus, memory controller, peripheral bus, and/or local bus using any of a variety of bus architectures (e.g. Industry Standard Architecture (ISA), Micro Channel Architecture (MCA), Enhanced ISA (EISA), Video Electronics Standards Association (VESA), and/or Peripheral Component Interconnect (PCI) bus, possibly implemented as a Mezzanine bus manufactured to the IEEE P1386.1 standard).
I/O subsystem 226 may include device controllers 228 for one or more user interface input devices and/or user interface output devices, possibly integrated with the computer system 200 (e.g., integrated audio/video systems, and/or touchscreen displays), or may be separate peripheral devices which are attachable/detachable from the computer system 200. Input may include keyboard or mouse input, audio input (e.g., spoken commands), motion sensing, gesture recognition (e.g., eye gestures), etc. As non-limiting examples, input devices may include a keyboard, pointing devices (e.g., mouse, trackball, and associated input), touchpads, touch screens, scroll wheels, click wheels, dials, buttons, switches, keypad, audio input devices, voice command recognition systems, microphones, three dimensional (3D) mice, joysticks, pointing sticks, gamepads, graphic tablets, speakers, digital cameras, digital camcorders, portable media players, webcams, image scanners, fingerprint scanners, barcode readers, 3D scanners, 3D printers, laser rangefinders, eye gaze tracking devices, medical imaging input devices, MIDI keyboards, digital musical instruments, and the like.
In general, use of the term “output device” is intended to include all possible types of devices and mechanisms for outputting information from computer system 200 to a user or other computer. For example, output devices may include one or more display subsystems and/or display devices that visually convey text, graphics and audio/video information (e.g., cathode ray tube (CRT) displays, flat-panel devices, liquid crystal display (LCD) or plasma display devices, projection devices, touch screens, etc.), and/or non-visual displays such as audio output devices, etc. As non-limiting examples, output devices may include indicator lights, monitors, printers, speakers, headphones, automotive navigation systems, plotters, voice output devices, modems, etc.
Computer system 200 may comprise one or more storage subsystems 210, comprising hardware and software components used for storing data and program instructions, such as system memory 218 and computer-readable storage media 216. System memory 218 and/or computer-readable storage media 216 may store program instructions that are loadable and executable on processor(s) 204. For example, system memory 218 may load and execute an operating system 224, program data 222, server applications, client applications 220, Internet browsers, mid-tier applications, etc. System memory 218 may further store data generated during execution of these instructions. System memory 218 may be stored in volatile memory (e.g., random access memory (RAM) 212, including static random access memory (SRAM) or dynamic random access memory (DRAM)). RAM 212 may contain data and/or program modules that are immediately accessible to and/or operated and executed by processing units 204. System memory 218 may also be stored in non-volatile storage drives 214 (e.g., read-only memory (ROM), flash memory, etc.). For example, a basic input/output system (BIOS), containing the basic routines that help to transfer information between elements within computer system 200 (e.g., during start-up), may typically be stored in the non-volatile storage drives 214. Storage subsystem 210 also may include one or more tangible computer-readable storage media 216 for storing the basic programming and data constructs that provide the functionality of some embodiments. For example, storage subsystem 210 may include software, programs, code modules, instructions, etc., that may be executed by a processor 204, in order to provide the functionality described herein. Data generated from the executed software, programs, code, modules, or instructions may be stored within a data storage repository within storage subsystem 210. Storage subsystem 210 may also include a computer-readable storage media reader connected to computer-readable storage media 216. Computer-readable storage media 216 may contain program code, or portions of program code. Together and, optionally, in combination with system memory 218, computer-readable storage media 216 may comprehensively represent remote, local, fixed, and/or removable storage devices plus storage media for temporarily and/or more permanently containing, storing, transmitting, and retrieving computer-readable information.
Computer-readable storage media 216 may include any appropriate media known or used in the art, including storage media and communication media, such as but not limited to, volatile and non-volatile, removable and non-removable media implemented in any method or technology for storage and/or transmission of information. This can include tangible computer-readable storage media such as RAM, ROM, electronically erasable programmable ROM (EEPROM), flash memory or other memory technology, CD-ROM, digital versatile disk (DVD), or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or other tangible computer readable media. This can also include nontangible computer-readable media, such as data signals, data transmissions, or any other medium which can be used to transmit the desired information and which can be accessed by computer system 200.
By way of example, computer-readable storage media 216 may include a hard disk drive that reads from or writes to non-removable, nonvolatile magnetic media, a magnetic disk drive that reads from or writes to a removable, nonvolatile magnetic disk, and an optical disk drive that reads from or writes to a removable, nonvolatile optical disk such as a CD ROM, DVD, and Blu-Ray® disk, or other optical media. Computer-readable storage media 216 may include, but is not limited to, Zip® drives, flash memory cards, universal serial bus (USB) flash drives, secure digital (SD) cards, solid state drives or ROM, DVD disks, digital video tape, and the like or combinations thereof. The disk drives and their associated computer-readable media may provide non-volatile storage of computer-readable instructions, data structures, program modules, and other data for computer system 200.
Communications subsystem 232 may provide a communication interface between computer system 200 and external computing devices via one or more communication networks, including local area networks (LANs), wide area networks (WANs) (e.g., the Internet), and various wireless telecommunications networks. As illustrated in
In some embodiments, communications subsystem 232 may also receive input communication in the form of structured and/or unstructured data feeds, event streams, event updates, and the like, on behalf of one or more users who may use or access computer system 200. For example, communications subsystem 232 may be configured to receive data feeds in real-time from users of social networks and/or other communication services, web feeds such as Rich Site Summary (RSS) feeds, and/or real-time updates from one or more third party information sources (e.g., data aggregators). Additionally, communications subsystem 232 may be configured to receive data in the form of continuous data streams, which may include event streams of real-time events and/or event updates (e.g., sensor data applications, financial tickers, network performance measuring tools, clickstream analysis tools, automobile traffic monitoring, etc.). Communications subsystem 232 may output such structured and/or unstructured data feeds, event streams, event updates, and the like to one or more data stores that may be in communication with one or more streaming data source computers coupled to computer system 200. The various physical components of the communications subsystem 232 may be detachable components coupled to the computer system 200 via a computer network, a FireWire® bus, or the like, and/or may be physically integrated onto a motherboard of the computer system 200. Communications subsystem 232 also may be implemented in whole or in part by software.
Due to the ever-changing nature of computers and networks, the description of computer system 200 depicted in the figure is intended only as a specific example. Many other configurations having more or fewer components than the system depicted in the figure are possible.
The learner engagement engine may monitor and measure a plurality of student engagement activities for a plurality of learning resources. For purposes of the present invention, engagement scoring refers to building an index of engagement scores, rooted in student success models, for both individual students and cohorts of students, that tracks their level of engagement across multiple aggregations. As non-limiting examples, cohort contexts may include course section, cross course section, institution(s), and custom cohorts (e.g., student athletes, co-requisite participants, etc.). As non-limiting examples, academic contexts may include discipline, topics/concepts, and learning objectives. As non-limiting examples, behavioral contexts may include activity patterns, learning resource type, and stakes state.
Engagement scoring, in context and in comparison, may provide insights about learner behaviors, such as: how learner engagement varies from course to course; how learning engagement trends across the student's full learning experience; how learning engagement may shift from particular academic areas of interest or proclivity to certain types of learning resources; the temporal ranges in which learners prefer to engage and how their scores vary; and how engagement scores vary with the importance of the work being done, such as high-stakes or low-stakes assessments.
Student behavior tracking feature vectors may be researched as part of a model. While any desired type of student engagement activity may be monitored and measured, non-limiting examples include: the time spent (average, median, total, comparative) with a learning resource, where the learning resource is preferably defined at a very detailed or low level (such as a particular paragraph or image in a book); object views (total, min, max, comparative); time and view aggregations at variable contexts (e.g., book, chapter, section, learning objective, interactive, etc.); engagement weighting by activity type (reading vs. practice vs. quiz vs. test vs. interactive vs. social, etc.); and lead time, i.e., the temporal distance between assigned and due contexts.
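To make the feature vector idea concrete, here is a minimal TypeScript sketch assembling a few of the features listed above for a single (student, learning resource) pair; the interface and field names are illustrative assumptions, not the system's actual schema:

```typescript
// Hypothetical feature record for one student on one learning resource.
interface EngagementFeatures {
  totalTimeSec: number;   // time spent with the learning resource
  medianTimeSec: number;  // median session time
  views: number;          // object views
  activityWeight: number; // engagement weighting by activity type
  leadTimeDays: number;   // days between assigned date and first activity
}

// Flatten the record into a numeric vector for downstream modeling.
function toVector(f: EngagementFeatures): number[] {
  return [f.totalTimeSec, f.medianTimeSec, f.views, f.activityWeight, f.leadTimeDays];
}

console.log(toVector({
  totalTimeSec: 4200, medianTimeSec: 300, views: 12,
  activityWeight: 0.6, leadTimeDays: 2,
})); // [4200, 300, 12, 0.6, 2]
```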
The present invention may be used to predict a level of engagement necessary to be successful in a given course. Predicting engagement may draw on planning from successful outcomes. In other words, the invention may take a best ball approach, learning from successful student behaviors coupled with learning design to create personalized models of how often, and with what types of engagement activities, learners should be engaging in order to be successful in their academic journey. Thus, the present invention may be used to provide guidance to students based on the engagement activities of past successful students.
As a non-limiting example, the learner engagement engine may recommend study hours and days of the week based on historical trending and estimated work times to support learners owning their learning experience by planning ahead. As another non-limiting example, the learner engagement engine may transmit study session strategy recommendations to the teacher or the students to help learners chunk their time in a meaningful way. As another non-limiting example, the learner engagement engine may transmit a lead time analysis graphical representation to guide the teacher or the student on how soon before an assignment is due a student should start working on the assignment.
In some disclosed embodiments, a system administrator (programmer, instructor, etc.) may create a dedicated library of Software Development Kits (SDKs) that consumers may select to optimize their implementation. To accomplish this, the disclosed system may include one or more producer system software modules 310 configured to receive data input into the system. These producer system software modules 310 may include components configured to publish various activities generated from user interactions. Non-limiting examples may include a Java publishing software development kit (SDK), a REST API, a JavaScript SDK, etc. In some embodiments, each of these may include schema validation before execution. Some embodiments may include an input processing and messaging system 320, which may or may not be integrated, associated with, or used in conjunction with the producer system software modules 310. In some embodiments, the producer system software modules 310 may include an e-text/course connect publisher, which may publish, as non-limiting examples: learning resource messages; generic object relationship messages; course section to context messages; course section enrollment messages; and activity messages (e.g., UserLoadsContent/UserUnloadsContent, described in more detail below). In some embodiments, the published data may be queued, consumed, published, persisted, read, and processed by a learner engagement engine 330, and read for display on the learner engagement engine analytics dashboard, as described below. In some embodiments, Document Object Model (DOM) events within the user interface (UI or GUI) may be used to capture user engagement by logging or otherwise recording DOM events input into the GUI as users interact throughout the content, as described in more detail below. In some disclosed embodiments, the system may be configured to classify event data into categories such as start, stop, and/or null (events that are not indicative of user activity, such as dynamic content loads). In some embodiments, a system administrator, or logic within the system itself, may classify events into engagement weighting categories (e.g., mouseover<scroll<click<etc.). In some embodiments, the system may capture, generate, and populate within the log associated with the DOM the date and/or timestamp of the user interaction, based on the established trigger categories. To capture the date timestamp and generate the log associated with the DOM, the system may be configured to start the log generation upon the user loading the page or other resource. In some embodiments, events may be logged upon a page or other resource being unloaded, as well as at some temporal frequency. As a non-limiting example, described in more detail below, in some embodiments, the events associated with a resource may be logged every {n} seconds (e.g., every 30 seconds) to minimize loss in the event the unload event is never reached due to system degradation or idle timeouts.
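A rough browser-side sketch of this logging approach in TypeScript follows: DOM events are buffered and flushed on a fixed interval and on unload, so that activity survives even if the final unload event never fires. The `/activity` endpoint, the event list, and the 30-second interval are assumptions for illustration, not the system's actual API.

```typescript
// Sketch only: capture DOM events as engagement signals and periodically
// flush the buffered log so activity is not lost if unload never fires.
type UiEvent = { type: string; at: number };

const buffer: UiEvent[] = [];
const FLUSH_EVERY_MS = 30_000; // hypothetical {n} = 30-second interval

for (const type of ["scroll", "click", "mousemove", "keydown"]) {
  window.addEventListener(type, () => buffer.push({ type, at: Date.now() }));
}

function flush(): void {
  if (buffer.length === 0) return;
  // "/activity" is an assumed collection endpoint for illustration.
  navigator.sendBeacon("/activity", JSON.stringify(buffer.splice(0)));
}

setInterval(flush, FLUSH_EVERY_MS);
window.addEventListener("pagehide", flush); // best-effort flush on unload
```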
The system may then be configured to read the log from the stream and/or store it for batch processing. As a non-limiting example, if the instructions in the disclosed system log all events every 30 seconds, then a queue would need to hold the events and process them in parallel as they arrive in order to string together the activities within a specific page.
As described below, the disclosed system may use in-stream aggregations to process student activity in near real time (NRT) so that engagement insights are as timely and accurate as possible for optimal real-time decision making by students and instructors. The data from the stream described above may therefore be input into an input processing and messaging system 320, which may process the input data. As non-limiting examples, this input processing may include: data dedupe; simple time aggregation (e.g., aggregation of interaction with learning resources); content hierarchy time aggregation (e.g., aggregation for a book based on chapters in the book, sections within the chapters, content within the sections, etc., as described below); temporal/time period analytics (e.g., an analysis of user interactions, broken down by resources, assignments, learning objectives, time spent, etc., and/or using/eliminating idle time within activities, as described below); and deeper engagement insights (e.g., engagement broken down by activities or users, analysis of groups such as classes, identification, alerts, and intervention for students that are not actively engaged, etc., as described below). The disclosed system may further pass the log through an activity engagement processor, possibly associated with, or part of, the engagement engine 330.
In some embodiments, the disclosed system may then pull in (e.g., select from data store 110) an engagement profile 300 that will allow a certain set of rules to be applied in calculating idle time. As a non-limiting example, if the timestamps indicate that there have been 20 seconds between start and stop, then the system may add 20 seconds to the user's time spent; but if the start and stop indicate 45 seconds from one UI event to the next UI event, and the profile indicates that a maximum of 30 seconds can be applied to time on task, then the rule may dictate that only 30 seconds be added to the time spent and 15 seconds to the idle time spent. Idle time analysis will be described in greater detail herein. The system may then process multiple messages by stringing together UI event start/stop timestamps in order to ensure that the system is not missing any events (e.g., in a scenario where the system logs events every 30 seconds). The system may then process the completion and progression update events based on the definitions of productive engagement, calculating completion/progress from the UI activities strung together from logs, possibly as defined in engagement profiles 300. It should be noted that in some embodiments, completion DOM markers may need to be placed in the UI, with scrolled-into-the-view-window/scrolled-out-of-the-view-window trigger events sent to indicate that a 'reached end' of the targeted area has been achieved.
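A minimal TypeScript sketch of that idle-time rule, assuming a per-profile cap on the gap between consecutive UI events (the names EngagementProfile and splitTime are illustrative):

```typescript
// Gaps between consecutive UI events count toward time on task only up to
// the profile's cap; any remainder is attributed to idle time.
interface EngagementProfile { maxGapSec: number } // e.g., a 30-second cap

function splitTime(
  eventTimestampsSec: number[], // sorted timestamps of UI events
  profile: EngagementProfile,
): { activeSec: number; idleSec: number } {
  let activeSec = 0;
  let idleSec = 0;
  for (let i = 1; i < eventTimestampsSec.length; i++) {
    const gap = eventTimestampsSec[i] - eventTimestampsSec[i - 1];
    activeSec += Math.min(gap, profile.maxGapSec);
    idleSec += Math.max(0, gap - profile.maxGapSec);
  }
  return { activeSec, idleSec };
}

// Matching the example above: a 20 s gap adds 20 s active; a 45 s gap with
// a 30 s cap adds 30 s active and 15 s idle.
console.log(splitTime([0, 20, 65], { maxGapSec: 30 })); // { activeSec: 50, idleSec: 15 }
```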
In some embodiments, the processed data may then be used as input into the engagement engine 330, which may include an engagement engine fact generator. The data generated by the engagement engine 330, as described herein, may be loaded into active memory for data logic to be performed on it, or may be stored (possibly via a batch layer) in long term memory, such as database 110. As additional data is generated and stored, the aggregated data may then be re-processed by the input processing components described above.
The processed and stored data may be utilized by additional services (e.g., additional software via API calls) within or associated with the system and additional composites and/or aggregations may be generated and stored in database 110. The composite/aggregated data may be displayed as part of a learning engagement engine analytics dashboard 340, which may include multiple UIs, as demonstrated and described below.
The disclosed system may run a utility to clear events that may have been opened but not closed, referred to herein as a “pipeline pig” for user engagement events. In some embodiments, this utility may be run or otherwise executed hourly (or at a different interval frequency determined by the instructions in the software) to clean up any events that are still ‘open’. As each of the items in the queue is analyzed, if the learner has not had a subsequent event in the queue over the last hour (or other designated interval frequency), then the system may process any engagement time in the remaining event. In some embodiments, this interval frequency may be applied via a dynamic time spent based on a predicted estimate. In some embodiments, this interval frequency may be a personalized estimate. The system may then close out the events for the given learner, or other events in the queue.
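The sweep might look something like the following TypeScript sketch; the OpenEvent shape and the estimator callback are assumptions for illustration, not the system's actual interfaces.

```typescript
// Sketch of the "pipeline pig" pass: any event still open after the quiet
// interval is closed out with a (possibly personalized) estimated time spent.
interface OpenEvent { learnerId: string; resourceId: string; openedAtSec: number }

function sweepOpenEvents(
  queue: OpenEvent[],
  nowSec: number,
  quietIntervalSec: number,                       // e.g., 3600 for an hourly sweep
  estimateTimeSpentSec: (e: OpenEvent) => number, // predicted or personalized estimate
): OpenEvent[] {
  const stillOpen: OpenEvent[] = [];
  for (const ev of queue) {
    if (nowSec - ev.openedAtSec > quietIntervalSec) {
      // No subsequent activity within the interval: close out the event.
      recordTimeSpent(ev, estimateTimeSpentSec(ev));
    } else {
      stillOpen.push(ev);
    }
  }
  return stillOpen;
}

function recordTimeSpent(ev: OpenEvent, sec: number): void {
  console.log(`closing ${ev.resourceId} for ${ev.learnerId}: +${sec}s`);
}
```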
In some embodiments, the system may use a session timeout listener to close out open events. In some embodiments, the system may query for session timeout events to be sent throughout the system and then correlate via learning resource ID what open load events should have an unload event dynamically generated.
In
Some embodiments may include a graphical representation of time spent by students. While any classifications may be used, in preferred embodiments time spent may be classified into learning, assessment, and non-academic related activities. In some embodiments, the time spent against the same content in multiple contexts (learning objectives) may be aggregated and presented in a graphical representation.
The figures include multiple graphical representations that may be transmitted to the client device of the teacher to inform the teacher how engaged the students are with the learning resources of the course.
Some embodiments may include a graphical illustration of the time spent on learning objectives by student, further broken down by problem for a selected student (e.g., Gamgee, Sam).
The non-limiting example embodiments in
In these example embodiments, the engagement engine may generate and/or analyze, from the engagement data, one or more features, and possibly intervention suggestions. As non-limiting examples, these features and/or intervention suggestions may include low activity reading, recommended study sessions, dynamic time on task, most engaging content, low activity assessing, student engagement scores, and progression on context.
Thus,
As further analyzed below, the breakdown of these relationships may be used to determine the user interaction for various objects, which may, for example, “roll up” to a parent object. As a non-limiting example, regarding Section 1.2, a user may interact with Image 1.2.1 for 10 seconds, and with Section 1.2 for a total of 30 seconds (i.e., 10 seconds interacting with Image 1.2.1, and 20 seconds interacting with parts of Section 1.2 other than Image 1.2.1). Regarding Chapter 1, a user may interact with Section 1.2 for 30 seconds, as described above, and may interact with Section 1.1 for 20 seconds, so that the user has interacted with Chapter 1 for 50 seconds. As a result, in this example, the user may have interacted with Book 1 for a total of 50 seconds.
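Instantiating the same roll-up idea with the example numbers above (the object IDs here are hypothetical):

```typescript
// The worked example as data: time is stored on the atomic objects, and
// parents are simply sums over their current children.
const timeOnObjectSec: Record<string, number> = {
  "image-1.2.1": 10,
  "section-1.2-other": 20, // Section 1.2 content other than the image
  "section-1.1": 20,
};
const children: Record<string, string[]> = {
  "section-1.2": ["image-1.2.1", "section-1.2-other"],
  "chapter-1": ["section-1.1", "section-1.2"],
  "book-1": ["chapter-1"],
};
const rollUp = (id: string): number =>
  (timeOnObjectSec[id] ?? 0) + (children[id] ?? []).reduce((s, c) => s + rollUp(c), 0);

console.log(rollUp("section-1.2")); // 30
console.log(rollUp("chapter-1"));   // 50
console.log(rollUp("book-1"));      // 50
```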
Some embodiments may include context and relationship messages. As non-limiting examples, these context and relationship messages may include details related to: the user; the course; enrollment; learning objectives; learning resources; and entity relationships (i.e., generalized relationship records used for defining the hierarchy of any aggregations to be performed by the engagement engine 330). Some embodiments may include activity messages, including UserLoadsContent and UserUnloadsContent messages and/or variables. In these embodiments, loading and unloading of learning resources may represent learners' engagement activity on the atomic resources. Each activity tracks the number of views, the time spent on the atomic resource, and the navigational details.
In another example, learning resource R2 may be mapped to learning objective L1 and assignment A1, so that when learning resource R2 is unloaded, and the interaction time determined, the aggregator aggregates the interaction time and assigns/maps it to assignment A1 and learning objective L1. Thus, in a non-limiting example baseline use case, editorial teams may create a product structure with three different contexts (e.g., book, assignment, learning objectives). A learner may then engage in the atomic level objects that map differently in each of the three contexts. The learner engagement engine may aggregate activity differently for each context for product experience to consume.
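A minimal sketch of this fan-out on unload, assuming a simple resource-to-contexts map (the names are illustrative):

```typescript
// One activity record is attributed to every context the resource
// currently maps to (here: an assignment and a learning objective).
interface ContextMap { [resourceId: string]: string[] }

const contexts: ContextMap = {
  R2: ["assignment-A1", "learning-objective-L1"],
};

const timeByContextSec: Record<string, number> = {};

function onUnload(resourceId: string, interactionSec: number): void {
  for (const ctx of contexts[resourceId] ?? []) {
    timeByContextSec[ctx] = (timeByContextSec[ctx] ?? 0) + interactionSec;
  }
}

onUnload("R2", 120);
// timeByContextSec: { "assignment-A1": 120, "learning-objective-L1": 120 }
```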
Returning to
The original TOC, which may be a syllabus, provides a hierarchical structure for a course. The TOC comprises a plurality of assignments. Each assignment comprises one or more learning resources. Each learning resource may comprise three or more levels. As an example, a learning resource may be a book. The book may comprise the most generic level (name of the book), a middle tier level (a chapter title of the book) and a most detailed level (an image or a specific paragraph within the chapter of the book). The original TOC may be generated by any desired means by an instructor.
The hierarchical structure for a course may include, as non-limiting examples, a book, chapter, section, presentation, slide, image, question, question part, video, interactive, article, web page, dashboard, learning objectives and topic.
In a simplified example embodiment, the original TOC may comprise a first original assignment and a second original assignment. In this example, the first original assignment may comprise a first plurality of learning resources and the second original assignment may comprise a second plurality of learning resources.
In a preferred embodiment, the TOC may be updated at any time so that: 1) any learning resource in the first plurality of learning resources and the second plurality of learning resources may be deleted, 2) a new learning resource may be added to either the first plurality of learning resources or the second plurality of learning resources and/or 3) any learning resource (which may be referred to as a delta learning resource) may be interchanged between the first plurality of learning resources and the second plurality of learning resources.
A learning engagement engine may measure a plurality of student engagement activities for the first plurality of learning resources and the second plurality of learning resources. In a preferred embodiment, the learning resources are broken down several levels and measurements are made at the most detailed level preselected for the learning resource. Measurements may be made by monitoring a student's activity online. Start times may be when a student downloads the material, and stop times may be when a student changes to a different learning resource or disconnects from the learning resource. The dates that a student is engaged may be recorded. The number of times a student selects or interacts with learning resources may be monitored, recorded, and saved. Comparisons between when a student started a project and when the project is due may also be measured and recorded. Average times for students to complete various portions of an assignment may also be calculated and graphically represented to the teacher. In some embodiments, idle time (when no student activity is detected) may be considered and removed from the various measurements as desired.
Of course, each learning resource will be unique, and how many levels it is broken down to may be selected based on what makes sense and will provide useful information to the instructor. Learning resources that are not broken down very far (such as only to a book level) will be easy for the system to track, but will not provide much information to the teacher regarding where in the book students may be having problems. Learning resources that are broken down too far (such as to a word level in a book) would be very difficult to track and would likely provide a lot of useless information to the teacher. Thus, depending on the course, students, and learning resources, it will typically be desirable to break a book down to either a chapter level or, if the chapters have further breaks (such as perhaps images or questions), down to the breaks within a chapter.
Thus, if the learning resource is broken down to an image or paragraph level, then measurements are taken at an image or paragraph level by the learner engagement engine. If the learning resource is only broken down to a chapter level, then measurements are taken at the chapter level. If the learning resource is only broken down to a book level (typically the least desirable option mentioned), then measurements are taken at the book level.
The student engagement activities may be any desired student activity, performed while engaged with a learning resource, that one wishes to measure. As non-limiting examples, the student engagement activity may be time on task, resource views, last activities, comparatives, content usage, progression, learning sessions, engagement index, rankings, comprehension, objective time, focus, idle time, estimations, and planning.
In an example embodiment, the learner engagement engine may aggregate the measurements in the plurality of student engagement activities that are in the first original assignment to determine a total amount of time spent on the first original assignment. Knowing how long each student was engaged with the learning resources in the first original assignment may be desirable for the teacher.
As an example, the first original assignment may be reading chapters 1, 2, and 3 in the book The Hobbit and the second original assignment may be reading chapter 4. If a student spent 1.1 hours reading chapter 1, 1.2 hours reading chapter 2, and 1.3 hours reading chapter 3, then the student would have spent 3.6 hours engaged on the first original assignment.
The learner engagement engine may also aggregate the measurements in the plurality of student engagement activities that are in the second original assignment. If the student spent 1.4 hours reading chapter 4, then the student would have spent 1.4 hours engaged with the second original assignment.
The learner engagement engine may graphically display any desired analytic that has been measured and determined. In the current example, the learner engagement engine may display the amount of time each student spent on the first original assignment and the amount of time each student spent on the second original assignment. In the above example, the student spent 3.6 hours on the first original assignment and 1.4 hours on the second original assignment.
As an example, the teacher may notice that an average time spent on the first assignment is significantly more than an average time spent on the second assignment (assuming the other students are similar to our example student). The teacher may desire that the average time spent for each assignment is more uniform.
As a specific example, the teacher using the electronic education platform may update the TOC so that a first updated assignment comprises reading chapters 1 and 2 in the book The Hobbit and a second updated assignment comprises reading chapters 3 and 4 in the book The Hobbit. Thus, in this example, chapter 3 (which may be referred to as the delta learning resource) was moved from the first original assignment to the second original assignment, thereby creating the first updated assignment (now without chapter 3) and the second updated assignment (now with chapter 3).
The teacher may desire to know the average time (or the time for a single student) that was spent on the first updated assignment and the average time for the second updated assignment, even though the students engaged the learning resources before the TOC was updated, i.e., when the chapters were arranged under different assignments. The learner engagement engine may aggregate the measurements of the plurality of student engagement activities that are in the first updated assignment (reading chapters 1 and 2) to determine a total amount of time spent by the student on the first updated assignment. The learning engagement engine may also aggregate the measurements of the plurality of student engagement activities that are in the second updated assignment (reading chapters 3 and 4) to determine a total amount of time spent by the student on the second updated assignment. Using the values from the previous example, the total amount of time for the first updated assignment would be 2.3 hours and the total amount of time for the second updated assignment would be 2.7 hours. It should be appreciated that the measurements of the student engagement activities were taken when the students were working on the original TOC, even though the same measurements are now being used to analyze the new assignments in the updated TOC.
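A short TypeScript sketch of this re-aggregation, using the example values above; note that the per-chapter measurements are unchanged and only the TOC grouping differs:

```typescript
// Per-chapter measurements recorded under the original TOC are simply
// re-grouped under the updated TOC; nothing is re-measured.
const hoursByChapter: Record<string, number> = { ch1: 1.1, ch2: 1.2, ch3: 1.3, ch4: 1.4 };

const originalToc = { first: ["ch1", "ch2", "ch3"], second: ["ch4"] };
const updatedToc = { first: ["ch1", "ch2"], second: ["ch3", "ch4"] };

const totalHours = (chapters: string[]): string =>
  chapters.reduce((sum, ch) => sum + hoursByChapter[ch], 0).toFixed(1);

console.log(totalHours(originalToc.first));  // "3.6"
console.log(totalHours(updatedToc.first));   // "2.3"
console.log(totalHours(updatedToc.second));  // "2.7"
```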
The learner engagement engine may now graphically display to the teacher using a client device the new metrics/analytics for the updated TOC using measurements taken when students were performing assignments defined by the original TOC.
In other embodiments, the learner engagement engine may send a text, email, and/or message within the system to a teacher when a student is having problems, as determined by the student being engaged below a preselected level (possibly selected by the teacher, or a default level selected by the system). As an example, the system may detect that a student is reading for a far shorter time on average than other students in the class or is starting assignments much closer to a due date than other students. For some students, this may indicate that they need an intervention or additional help to be successful. For other students, if they are doing well on the assessments, this may indicate that the student is not being challenged or learning as much as they could from the course. The teacher may wish to adjust the TOC if the course, based on the analytics, looks either too hard or too easy for the students.
In other embodiments, the learner engagement engine may look for past successful students (based on high assessment scores) and average one or more of their student engagement activities. As non-limiting examples, the learner engagement engine may aggregate how long past successful students took to perform an assignment and/or how long before an assignment was due the successful students started working on the assignments. Current students' analytics may be compared to past successful students' analytics, and differences over predefined limits may be communicated to the teacher so that the teacher may intervene with the student. In other embodiments, student strategies (derived from the successful students) may be communicated to the teacher and/or any students that are deviating too far from the averages of the successful students.
The disclosed embodiments address three issues associated with learner engagement: time spent accuracy, time spent loss, and engagement progression and completion. In other words, at a high level, embodiments of the system that determine user activity and engagement based only on loading and unloading present at least three issues that need to be solved in order to improve efficiency. Non-limiting example scenarios may demonstrate these problems.
In some scenarios, a user may load a resource, but click within the UI on a resource unrelated to their current workload (e.g., YouTube, social media, etc.), thereby navigating away from the resource and no longer engaging with the intended resource, class, etc. Even if the other resource is related to their workload, it is impossible to know whether it is. Thus, the problem with the load/unload approach is that it only tracks the loading and unloading of resources.
The first issue is determining the accuracy of the time the learner spent engaged with the disclosed system, referred to herein as time spent accuracy. In the embodiments disclosed above, tracking a user's time spent engaged with the disclosed system is accomplished by capturing the timestamps of when an object loads and when it unloads. The disclosed system then calculates the span between those timestamps. This method assumes that between the load timestamp and the unload timestamp the learner is actively engaged, when in reality they may have stopped interacting with the page even though the page is still open or otherwise being accessed.
One problem with this approach is that human behaviors such as stepping away from the computer or shifting one's focus to a non-learning browser tab or activity conceptually stop the time spent learning. Thus, in the embodiments disclosed below, a more accurate time spent provides instructors with more realistic learner insights into both time spent and behaviors, as well as providing more useful facts for research and personalization systems.
Thus, the most efficient approach to time spent accuracy may be to identify a metric that tracks all user input as it is entered (e.g., scrolling, moving, checking things, clicking, hovering, etc.), in order to more accurately determine, using system logic, whether the user is truly active or idle. The system may then identify patterns, record them, and create templates and libraries, to more accurately determine learner engagement.
A second issue with the approach disclosed above is time spent loss. The current implementation is limited in that it only calculates the time spent once an unload event is sent across multiple applications, possibly through a network. In scenarios where the unload event is prevented from being sent through the network, the time spent by the learner will not be captured in its entirety. As non-limiting examples, such scenarios may include browser freezes and/or crashes, computer lockups, computer power outages, session timeouts, etc., each demonstrating time lost when the unload event does not fire. Capturing events in a stream with more frequent tracking and creating hooks into session management systems will minimize loss in these scenarios.
Thus, in some scenarios, the problem to solve is the time spent loss. For example, if a user closes a browser, there would be no unload event with which to determine the time spent in learner engagement. In other words, there would be a load event for a particular resource but no matching unload event, making it impossible to determine how long a user was engaged. Other scenarios for time spent loss may include a session timeout, a closed browser, a computer crash, etc. What is needed, therefore, is a system that tracks, as efficiently as possible, real time and real activity types.
A third issue, referred to herein as engagement progression and completion, is the limitation of tracking only the load and unload events against learning objects, which prevents the system from tracking progression and completion based on defined productive learning behaviors, thereby preventing the system from creating a more meaningful engagement score.
Tracking against loading, unloading, and specific UI activities in the content allows the disclosed embodiments to define more valuable ‘completion’ and ‘progression’ events that are an aggregation of the actual clickstream activities that learners emit on learning objects.
The disclosed embodiments may include multiple features, such as real vs. feel time spent vs. idle. In some embodiments, the system may determine real time spent by removing idle time. In some embodiments, the system may include a feature applying a feel time spent based on the original approach of simple load/unload. In some embodiments, the system may identify the difference between the two (e.g., load/unload vs. real time spent based on removing idle time), and comparing that difference across learners may provide behavioral indicators. Some embodiments may include personalized planning. In some embodiments, the system may help students understand how much time it would take them to complete their assignment work based on past activity engagement data vs. the median or average time it takes for users.
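A trivial sketch of the 'feel' vs. 'real' distinction (the names are illustrative):

```typescript
// 'Feel' time is the simple load/unload span; 'real' time removes idle
// time. Their difference itself can be a behavioral indicator.
function feelVsReal(loadSec: number, unloadSec: number, idleSec: number) {
  const feel = unloadSec - loadSec; // original load/unload approach
  const real = feel - idleSec;      // idle time removed
  return { feel, real, idleShare: idleSec / feel };
}

console.log(feelVsReal(0, 600, 150)); // { feel: 600, real: 450, idleShare: 0.25 }
```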
Some embodiments may use a focus score. To accomplish this, the disclosed embodiments may establish a method to calculate (based on models and/or comparatives) whether or not the learner is productive and focused during their learning sessions (moving consistently through the content or jumping to non-content pages as a possible area of exploration).
Some embodiments may include anomaly detection, which may indicate a potential for cheating. As a non-limiting example, the system may detect patterns of behaviors, strings of specific events, or completion in unrealistic times, which may provide indicators to cheating algorithms (much the way instructors sometimes use their instincts), combined with time on task, to detect cheating. This may be similar to credit card companies' fraud detection, where they pick up on specific usage patterns of newly stolen cards; in the same way, the system may be able to detect patterns that indicate cheating.
Some embodiments may include a completion/progression pattern registry, library, and detection. In these embodiments, consumers can draw from default definitions or register specific definitions of ‘completion’ and ‘progress’ based upon content types and/or their own product model use cases. Using specific defined UI events (activity) on a given content combination in a specific defined order (sequence) can help to detect progression through the ‘session’ and/or completion that more closely reflects learning behavior or productive engagement, as opposed to simple loading and unloading of content.
The following are non-limiting examples of default content registrations. Predefined registrations can be added to the system based on cross-product-model research of best practices. Examples might include: 1. ‘narrative_page’ entered as ‘shared’ in the registry, with ‘completion’ defined as Completion=(Loads Page, Scrolls, Reaches at least 80% of page) and a progression assignment of Progression=(0.20, 0.20, 0.60); or 2. ‘activity_page’ entered as ‘shared’ in the registry, with ‘completion’ defined as Completion=(Loads Page, Clicks on Interactive, Scrolls, Reaches at least 80% of page) and a progression assignment of Progression=(0.20, 0.40, 0.20, 0.20).
The system may further include product-consumer-specific registrations. Non-limiting examples may include: 1. an eText reading page, wherein the consumer may register a definition specific to their product model (such as a narrative-only page) like ‘etext_page_completion’ in the registry and define ‘completion’ as Completion=(Loads Page, Scrolls, Reaches Bottom of Page, Time>30 Seconds) and a progression assignment of Progression=(0.20, 0.30, 0.30, 0.20); 2. an embedded activity page, wherein the consumer may register a definition specific to their product model (such as an activity embedded in a narrative page) like ‘embed_page_completion’ in the registry and define ‘completion’ as Completion=(Loads Page, Answers Question, Scrolls, Reaches Bottom of Page, Time>40 Seconds) and a progression assignment of Progression=(0.20, 0.50, 0.10, 0.10, 0.10); or 3. video segment focus (a more complex example), wherein the consumer may register a definition specific to their product model (the learner should at least watch the section on polynomials) like ‘cse_video_completion’ in the registry and define ‘completion’ as Completion=(Loads Video, Segment 1:10 sec-2:30 sec watched, min view 1×) and a progression assignment of Progression=(0.20, 0.80).
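A sketch of what such a registry entry and its progression calculation might look like, mirroring the ‘narrative_page’ example above; the API shape is an assumption for illustration:

```typescript
// A registry entry pairs an ordered list of required UI events with
// matching progression weights (which sum to 1.0 at completion).
interface CompletionDefinition {
  steps: string[];   // required events, in order
  weights: number[]; // progression credit per step
}

const registry: Record<string, CompletionDefinition> = {
  narrative_page: {
    steps: ["LoadsPage", "Scrolls", "ReachesAtLeast80PercentOfPage"],
    weights: [0.2, 0.2, 0.6],
  },
};

// Progression = sum of weights for the steps observed so far, in order.
function progression(def: CompletionDefinition, observed: string[]): number {
  let credit = 0;
  for (let i = 0; i < def.steps.length && i < observed.length; i++) {
    if (observed[i] === def.steps[i]) credit += def.weights[i];
    else break;
  }
  return credit; // 1.0 indicates 'completion'
}

console.log(progression(registry.narrative_page, ["LoadsPage", "Scrolls"])); // 0.4
```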
Referring now to
In a non-limiting example embodiment, the content may be accessible via a website portal, which loads and/or navigates to one or more web pages that contain the content. The user may navigate to a page, P1, and the UserLoads variable may be set to active in step 2400. The user may interact with page P1 for 20 seconds, and may then navigate away from the page in step 2405, causing the UserUnloads variable for page P1 to be set to active. Continuing the example, the user may navigate to a second page, P2, in step 2410, and the UserLoads variable for page P2 may be set to active. In other words, in this simplified example, the user spends 20 seconds on page P1, then navigates away from page P1 to page P2, at which point the UserLoads variable for page P1 may be set to inactive. In this example, only the UserLoads variable and the UserUnloads variable may be tracked. Using only these two variables, the system is unable to determine whether the user is actively engaged with the loaded pages in step 2415, and is unable to track user movement to additional GUI or browser tabs, etc. In theory, these pages could be loaded and sit inactive.
In some embodiments, when the UserLoads variable is set to idle, the UserUnloads variable for page P2 may be set to active. Continuing the example, if the user provides mouse or keyboard input related to page P2 in step 2415, the UserLoads variable may be set to active, and in some embodiments, the UserUnloads variable for page P2 may be set to idle. At this point in the example, in some embodiments, one or more browser activity tracking software modules may be activated, which may track and indicate user interaction activity, such as mouse or keyboard activity (e.g., scrolling, mouse clicks, keyboard input, etc.) in step 2415. This browser or other UI activity may include multiple UI events (possibly derived from HTML and/or JavaScript DOM UI events, such as scrolling, onclick, onmouseover, playing a video through a browser, etc.). The system may therefore actively capture these UI events, such as scrolling through a page, moving the mouse, tapping a keyboard, tapping a screen, etc. In the non-limiting example in
In step 2420, the system may store (possibly within the system logic or engagement profiles 300) a time interval representing a time during which there is no engagement with the UI. In the example in the figures, this interval may be measured between the most recent UI event and the next detected UI event.
The system may repeat this process in analogous steps 2430-2460, and in some embodiments, in steps 2465-2470, a timeout may be set for activity within the system. As a non-limiting example, if the timeout for activity or inactivity is set to 30 minutes and the disclosed system detects that 30 minutes of inactivity have passed, a session timeout may be recognized, and the UserUnloads variable for the relevant page may be set to idle, with a timeout state recorded as well.
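By way of a purely illustrative, non-limiting sketch, idle intervals and the 30-minute session timeout described above may be detected from the timestamped event log as follows (the data shapes are assumptions):

```typescript
// Illustrative sketch of idle-interval and session-timeout detection.
type UiEvent = { pageId: string; type: string; timestamp: number };

const SESSION_TIMEOUT_MS = 30 * 60 * 1000; // 30-minute timeout from the example

interface IdleInterval {
  start: number;     // timestamp of the last observed UI event
  end: number;       // timestamp of the next observed UI event
  timedOut: boolean; // true when the gap triggers a session timeout
}

// Scan consecutive events and record each gap with no UI engagement; a gap of
// 30 minutes or more is recognized as a session timeout (UserUnloads set to
// idle, with a timeout recorded).
function findIdleIntervals(events: UiEvent[], minGapMs: number): IdleInterval[] {
  const intervals: IdleInterval[] = [];
  for (let i = 1; i < events.length; i++) {
    const gap = events[i].timestamp - events[i - 1].timestamp;
    if (gap >= minGapMs) {
      intervals.push({
        start: events[i - 1].timestamp,
        end: events[i].timestamp,
        timedOut: gap >= SESSION_TIMEOUT_MS,
      });
    }
  }
  return intervals;
}
```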
In some embodiments, the system may be configured to identify browser sessions, management sessions, etc., using a configurable session window (e.g., the 30-minute timeout in the example above).
Embodiments such as that seen in the accompanying figures may also track completion of individual content pages. In step 2520, a user may navigate to narrative page2, and the UserLoads variable state may be set to active. In step 2525, browser activity tracking may detect that the user has scrolled, and a new variable state for UserActivity may be set to (scroll). However, in step 2530, the user may navigate away from narrative page2 without completing all UI activities. The data for UserLoads and UserActivity may again be processed as described above.
This data may be used to provide the system with process completion data and process progress data, giving consumer applications more accurate information. Using profile data, possibly from the engagement profiles 300, the system may calculate completion and progression values for the page even though not every registered UI activity was completed.
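By way of a purely illustrative, non-limiting sketch, the observed activity for narrative page2 may be scored against its registered criteria as follows; the mapping from raw events to criteria is an assumption:

```typescript
// Illustrative sketch of scoring observed UI activity against a registration.
type UiEvent = { pageId: string; type: string; timestamp: number };

// The 'narrative_page' registration from the default-registration examples.
const narrativePage = {
  completion: ["Loads Page", "Scrolls", "Reaches at least 80% of page"],
  progression: [0.2, 0.2, 0.6],
};

// Translate raw events for one page into satisfied completion criteria.
function satisfiedCriteria(events: UiEvent[], pageId: string): Set<string> {
  const satisfied = new Set<string>();
  for (const e of events) {
    if (e.pageId !== pageId) continue;
    if (e.type === "load") satisfied.add("Loads Page");
    if (e.type === "scroll") satisfied.add("Scrolls");
    // Reaching 80% of the page would be derived from scroll-position events.
  }
  return satisfied;
}

// A learner who loads page2 and scrolls, but navigates away before reaching
// 80% of the page, reports progress of 0.20 + 0.20 = 0.40 rather than zero.
function progressFor(events: UiEvent[], pageId: string): number {
  const satisfied = satisfiedCriteria(events, pageId);
  return narrativePage.completion.reduce(
    (sum, criterion, i) =>
      sum + (satisfied.has(criterion) ? narrativePage.progression[i] : 0),
    0,
  );
}
```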
In some embodiments, the system may be configured to identify focus between tabs within the system, such as between browser tabs, or between different active programs within the system. In these embodiments, the system may include various “listeners” that determine when a user has moved between various tabs or active programs. Based on the nature of the tab or program, the disclosed system may determine whether the user is active or idle.
As a non-limiting example, the system may detect that a user has shifted focus from the page being tracked to an unrelated browser tab or program, and may treat the time spent away as idle until focus returns.
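By way of a purely illustrative, non-limiting sketch, such “listeners” may be implemented with standard browser focus events; the active/idle state variable is an assumption:

```typescript
// Illustrative sketch of tab/program focus listeners.
let userState: "active" | "idle" = "active";

// A hidden document means the learner switched to another tab or program.
document.addEventListener("visibilitychange", () => {
  userState = document.hidden ? "idle" : "active";
});

// Losing window focus (e.g., selecting a different active program) also marks
// the learner idle until focus returns.
window.addEventListener("blur", () => { userState = "idle"; });
window.addEventListener("focus", () => { userState = "active"; });
```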
In some embodiments, system logic, engagement profiles, or other stored data may be used to determine more accurate ranges of times of activity or inactivity, and thereby determine when a user is actively engaged or is idle. This data may be used to generate specific recommendations for each user. For example, the data collected for a single student may be used to plan the time needed for that student to complete an assignment, allocating a certain amount of time based on past performance and taking previous active and idle time into consideration.
For example, the system could analyze historical patterns for Student 1 and recommend that Student 1 allow an hour and a half to complete an assignment (and therefore needs additional time), even though the course average is only about 45 minutes.
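By way of a purely illustrative, non-limiting sketch, such a personalized estimate may be as simple as averaging a student's past active time on comparable assignments; the data shape and the use of a plain average are assumptions:

```typescript
// Illustrative sketch of a per-student time recommendation.
interface AssignmentRecord {
  activeMinutes: number; // active time on a past assignment, idle time excluded
}

function recommendedMinutes(history: AssignmentRecord[]): number {
  if (history.length === 0) return 0;
  const total = history.reduce((sum, r) => sum + r.activeMinutes, 0);
  return total / history.length;
}

// Example: Student 1 averages ~90 active minutes even though the course
// average is ~45, so the personalized recommendation is the longer estimate.
const student1 = [{ activeMinutes: 95 }, { activeMinutes: 85 }, { activeMinutes: 90 }];
console.log(recommendedMinutes(student1)); // 90
```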
As noted above, the disclosed system may include multiple libraries and/or SDKs, which may be used to provide many variations in the functionality described herein, and may be used to customize this functionality to a particular software product, content, etc. In some embodiments, the system may select an engagement profile 300, allowing a certain set of rules to be applied in calculating idle time. As an example, if the timestamps indicate that 20 seconds passed between start and stop, then 20 seconds are added to the user's time spent; but if 45 seconds passed from one UI event to the next, and the profile indicates that a maximum of 30 seconds can be applied to time on task with the remaining time applied to idle time, then only 30 seconds are added to the time spent and 15 seconds to the ‘idle’ time spent.
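By way of a purely illustrative, non-limiting sketch, the capping rule in this example may be expressed as follows (the profile shape is an assumption):

```typescript
// Illustrative sketch of the engagement-profile gap rule: time up to the cap
// counts toward time on task; anything beyond the cap counts as idle.
interface EngagementProfile {
  maxGapSeconds: number; // e.g., 30
}

function splitGap(gapSeconds: number, profile: EngagementProfile) {
  const active = Math.min(gapSeconds, profile.maxGapSeconds);
  return { active, idle: gapSeconds - active };
}

splitGap(20, { maxGapSeconds: 30 }); // { active: 20, idle: 0 }
splitGap(45, { maxGapSeconds: 30 }); // { active: 30, idle: 15 }
```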
In another example, the system may be configured to store a specific time interval during which it collects UI events. In the example above, the system may be configured to collect UI events and log them every 30 seconds. By doing so, the disclosed system may avoid losing data when a page or other resource is loaded but never unloaded (e.g., if the system or browser crashes): at most 30 seconds of data would be lost, rather than an entire 5-minute interval of activity.
In some embodiments, the libraries/SDKs may contain instructions which cause the system to store the UI events in a queue every 30 seconds, and may pass this data to the input processing and messaging system 320, which may then parse and process the data in the queue and separate idle time from active time. Over time, the disclosed system may use the logged data to generate a model, which, for example, may include an algorithm defining the time interval used to identify idle time for individual students, classes, courses, etc.
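By way of a purely illustrative, non-limiting sketch, the 30-second queue may be implemented as a periodic flush; the sendToProcessing() call standing in for the input processing and messaging system 320 is hypothetical:

```typescript
// Illustrative sketch of the 30-second UI event queue.
type UiEvent = { pageId: string; type: string; timestamp: number };

declare function sendToProcessing(batch: UiEvent[]): void; // hypothetical transport to system 320

const QUEUE_FLUSH_MS = 30 * 1000;
let queue: UiEvent[] = [];

function enqueue(event: UiEvent): void {
  queue.push(event);
}

setInterval(() => {
  if (queue.length === 0) return;
  const batch = queue;
  queue = [];
  // If the browser crashes, at most ~30 seconds of events are unflushed.
  sendToProcessing(batch);
}, QUEUE_FLUSH_MS);
```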
In some embodiments, system logic and/or the engagement profiles may be configured to define parameters such as the time interval according to differences between running software applications, and/or software applications that access the disclosed system through an API, for example. As a non-limiting example, in these application-based profiles, the system may use a much shorter idle-detection interval for a program that requires extensive user activity than for a program that only requires reading, which may legitimately include longer intervals with little user activity.
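By way of a purely illustrative, non-limiting sketch, such application-based profiles may simply parameterize the idle-detection interval; the profile names and values below are assumptions:

```typescript
// Illustrative sketch of per-application engagement profiles: an
// interaction-heavy program uses a short idle-detection interval, while a
// reading-only program tolerates long quiet gaps.
const applicationProfiles: Record<string, { maxGapSeconds: number }> = {
  interactive_homework: { maxGapSeconds: 15 }, // frequent input expected
  etext_reader: { maxGapSeconds: 120 },        // long, quiet reading is normal
};
```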
In some embodiments, the log of user input data may be passed through an activity engagement processor 350, which may select an engagement profile that allows a certain set of rules to be applied in calculating idle time, applying the same 30-second cap described in the example above (e.g., a 20-second gap counts fully toward time spent, while a 45-second gap counts as 30 seconds of time spent and 15 seconds of idle time).
In some embodiments, the system may process multiple messages by stringing together UI event start/stop timestamps, in order to ensure that no events are missed (in the scenario where the system logs events every 30 seconds). In some embodiments, the system may process the completion and progression update events by reading in the definitions of productive engagement and calculating completion/progress from the UI activities strung together from the logs. In some embodiments, the system may run the ‘Engagement Pipeline Pig’ (EPP) on an hourly (or other configurable) interval to clean up any events that are still ‘open’. In some embodiments, as the EPP passes over the queue, if the learner has not had a subsequent event in the queue over the last hour, the system may process any engagement time in the remaining event, potentially applying a dynamic time spent based on a predicted or personalized estimate.
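By way of a purely illustrative, non-limiting sketch, the hourly cleanup pass described above (the role played by the ‘Engagement Pipeline Pig’) may resemble the following; the open-event shape and the creditTimeSpent() call are hypothetical:

```typescript
// Illustrative sketch of an hourly pass that closes stale 'open' events.
interface OpenEvent {
  learnerId: string;
  lastTimestamp: number;    // last event seen for this learner, ms since epoch
  estimatedSeconds: number; // predicted or personalized time-spent estimate
}

declare function creditTimeSpent(learnerId: string, seconds: number): void; // hypothetical

const ONE_HOUR_MS = 60 * 60 * 1000;

function closeStaleEvents(open: OpenEvent[], now: number): void {
  for (const event of open) {
    if (now - event.lastTimestamp > ONE_HOUR_MS) {
      // No subsequent event in the last hour: close the event and credit a
      // dynamic time spent based on the predicted/personalized estimate.
      creditTimeSpent(event.learnerId, event.estimatedSeconds);
    }
  }
}
```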
Other embodiments and uses of the above inventions will be apparent to those having ordinary skill in the art upon consideration of the specification and practice of the invention disclosed herein. The specification and examples given should be considered exemplary only, and it is contemplated that the appended claims will cover any other such embodiments or modifications as fall within the true scope of the invention.
The Abstract accompanying this specification is provided to enable the United States Patent and Trademark Office and the public generally to determine quickly from a cursory inspection the nature and gist of the technical disclosure, and is in no way intended to define, determine, or limit the present invention or any of its embodiments.
Filing document: PCT/US2020/045966, filed 8/12/2020 (WO).
Priority application: 62885757, filed August 2019 (US).