This Summary is provided to introduce a selection of concepts in a simplified form that are further described below in the Detailed Description. This Summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used as an aid in determining the scope of the claimed subject matter.
The technology described herein helps a student learn how to read by determining a present reading level for the student and dynamically providing feedback that accurately identifies the specific oral reading errors made. Currently available reading applications do not perform an adequate analysis to identify the types of oral reading mistakes being made. The failure to identify oral reading mistakes, such as hesitations (‘uh-uh . . . pony’), word omissions, word or syllable insertions and other errors, results in inaccurate student assessment and proficiency scoring, as well as suboptimal learning feedback being provided to the student. The inability to handle these errors properly reduces learning outcomes and makes existing remote learning tools less useful.
When oral reading errors are present, existing systems may either ask the user to resubmit the audio in question or provide a nonsense response based on the system's inability to handle the error. This is especially true when the oral errors result in an overall accuracy rate of less than 95%. An accuracy rate is the percentage of oral content (e.g., words, letters, and phones) that match the expected content. A nonsense response typically causes the user to restate the original audio or give up. Proper handling of these errors does not currently exist outside of the technology described herein.
Further, existing systems may simply report an accuracy rate without identifying the specific errors being made. Allowing a student to understand the type of errors made helps the student avoid the same errors in subsequent efforts and helps the student understand what correct reading is.
The technology described herein is illustrated by way of example and not limitation in the accompanying figures in which like reference numerals indicate similar elements and in which:
The various technologies described herein are set forth with sufficient specificity to meet statutory requirements. However, the description itself is not intended to limit the scope of this patent. Rather, the inventors have contemplated that the claimed subject matter might also be embodied in other ways, to include different steps or combinations of steps similar to the ones described in this document, in conjunction with other present or future technologies. Moreover, although the terms “step” and/or “block” may be used herein to connote different elements of methods employed, the terms should not be interpreted as implying any particular order among or between various steps herein disclosed unless and except when the order of individual steps is explicitly described.
The technology described herein may provide interfaces tailored to three different kinds of users. A student interface presents text to a student and captures an audio recording of the student reading the text. The student's reading recording may be analyzed for errors and assigned a proficiency score. The errors and proficiency score may be presented to the student, a student's parent or guardian, and a student's teacher. An assignment interface allows a student to review reading assignments that need to be completed or have been completed. A library interface provides reading assignments that a student may select. A recommendation engine can recommend reading assignments, books, games, and other educational content to a student based on reading level, genre interests, and sounds, word groups, grammar or other aspects the student needs to practice. As an example of grammar, if a student struggled to identify capital letters, then a game may be recommended that requires the student to select between capital and lower case letters.
The teacher interface allows the teacher to view assignments completed by each student in the class. Various classroom statistics can be provided to determine how the class as a whole is doing on reading assignments. The classroom statistics can help a teacher allocate more or less time to reading in the classroom based on the class performance. The classroom statistics also help teachers focus attention on students that need the most help. Teachers may be provided an interface that allows them to review and correct misclassifications made by the error classification system. A misclassification can be assigning an error where none occurred or the failure to identify an error in the reading. A misclassification also includes assigning the wrong error type to an error. The misclassification can be originally reported by a student or parent. The student or parent can report the misclassification for teacher review. Confirmed errors may be used to tune the classifier to a student's specific speaking style.
Having briefly described an overview of aspects of the technology described herein, an operating environment in which aspects of the technology described herein may be implemented is described below in order to provide a general context for various aspects.
Turning now to
Among other components not shown, example operating environment 100 includes a number of user devices, such as user devices 102a and 102b through 102n; a number of data sources, such as data sources 104a and 104b through 104n; server 106; and network 110. Each of the components shown in
User devices 102a and 102b through 102n may be client devices on the client-side of operating environment 100, while server 106 may be on the server-side of operating environment 100. The user devices may facilitate output of text for a student to read aloud and recording the student's reading in an audio file. The audio file that contains the student's oral reading may be communicated to other components of the system, such as the error identifier 220. In another aspect, the user devices 102a connect to a webpage or other remote application that receives an audio stream of the user reading. The user devices may run applications that facilitate the technology described herein. In one aspect, the technology runs entirely on a user device 102. In other aspects, a hybrid model is used where components of the technology reside on the server side of the environment 100, while interactions occur through the user devices 102. The devices may belong to many different users and a single user may use multiple devices.
Server 106 may comprise server-side software designed to work in conjunction with client-side software on user devices 102a and 102b through 102n to implement any combination of the features and functionalities discussed in the present disclosure. For example, the server 106 may run the reading instruction system 200, which assigns a reading level to a reading attempt and/or provides error feedback. The server 106 may receive digital assets, such as files of documents, audio files, video files, emails, social media posts, user profiles, and the like for storage, from a large number of user devices belonging to many users. This division of operating environment 100 is provided to illustrate one example of a suitable environment, and there is no requirement for each implementation that any combination of server 106 and user devices 102a and 102b through 102n remain as separate entities.
User devices 102a and 102b through 102n may comprise any type of computing device capable of use by a user. For example, in one aspect, user devices 102a through 102n may be the type of computing device described in relation to
Data sources 104a and 104b through 104n may comprise data sources and/or data systems, which are configured to make data available to any of the various constituents of operating environment 100, or reading instruction system 200 described in connection to
Operating environment 100 may be utilized to implement one or more of the components of reading instruction system 200, described in
Referring now to
Reading instruction system 200 includes reading interface 205, speech-to-text component 210, error identifier 220 (and its components 221, 222, 224, 226, 228, 230, 232, 234), and cueing system 250 (and its components 252, 254, 256). These components may be embodied as a set of compiled computer instructions or functions, program modules, computer software services, or an arrangement of processes carried out on one or more computer systems, such as computing device 1500 described in connection to
In one aspect, the functions performed by components of reading instruction system 200 are associated with one or more applications, services, or routines. In particular, such applications, services, or routines may operate on one or more user devices (such as user device 102a), servers (such as server 106), may be distributed across one or more user devices and servers, or be implemented in the cloud. Moreover, in some aspects, these components of reading instruction system 200 may be distributed across a network, including one or more servers (such as server 106) and client devices (such as user device 102a), in the cloud, or may reside on a user device, such as user device 102a. Moreover, these components, functions performed by these components, or services carried out by these components may be implemented at appropriate abstraction layer(s), such as the operating system layer, application layer, hardware layer, etc., of the computing system(s). Alternatively, or in addition, the functionality of these components and/or the aspects described herein may be performed, at least in part, by one or more hardware logic components. For example, and without limitation, illustrative types of hardware logic components that may be used include Field-programmable Gate Arrays (FPGAs), Application-specific Integrated Circuits (ASICs), Application-specific Standard Products (ASSPs), System-on-a-chip systems (SOCs), Complex Programmable Logic Devices (CPLDs), etc. Additionally, although functionality is described herein with reference to specific components shown in example reading instruction system 200, it is contemplated that in some aspects functionality of these components may be shared or distributed across other components.
Several data items are also shown in
The reading interface 205 can be a dedicated application-specific interface, such as provided by a reading application running on a smart phone, tablet, e-reader, laptop, or other user device. A reading application is an application specifically designed to help a student read. In another aspect, the reading interface 205 can be provided by a web browser or other software running on a user device. For example, the reading interface 205 could be a plug-in or other add-on to an e-reading application, a document application, or other content presentation application.
The reading interface 205 captures audio data 202 of a student reading the input text 201. The audio data 202 could be an audio file that stores all or a portion of the audio recording of the user reading. In another aspect, the audio data 202 takes the form of an audio stream communicated over a network, from the user device on which the reading interface 205 is output, to a device hosting the speech-to-text component 210. In aspects, the speech-to-text component 210 can reside on the user device. The speech-to-text component 210 converts the audio data 202 into converted text 203 that represents a textual version of the words and other sounds (if any) spoken by the student and recorded in the audio data 202. In one aspect, the speech-to-text component 210 does not employ natural language understanding technology that resolves ambiguous audio sounds according to a language model or understood grammar. Using such a model may eliminate some errors made in the oral reading attempt, and a goal of the technology herein is to capture any errors made. Accordingly, aspects of the technology may use speech-to-text technology that produces a more phonetic or literal conversion of the oral reading attempt. The audio data may be stored in a file for later use, including for replay to the student, teacher, or parent.
The converted text 203 is then input to the error identifier 220 along with the input text 201. The error identifier 220 identifies and classifies errors found within the converted text 203 and provides the classified errors to the cueing system 250, which in turn generates an MSV (meaning, structural, and visual cues) score 258. The MSV score 258 may be provided to the user through the reading interface 205, through an email, through text, or through some other mechanism. The MSV score may also be used by other components (not shown) to select and/or generate future reading assignments for a student. The MSV score 258 may be associated with the user via a user profile stored in the reading database 260 or elsewhere. The MSV score is a type of proficiency score.
The identified errors and their associated classifications can be used to generate a feedback report 259. The feedback report can be provided to the user through the reading interface 205 or through some other mechanism. In one aspect, the feedback report shows all or parts of both the input text 201 and the converted text 203. Where the input text 201 and the converted text 203 agree, a single text line may be shown. Where an error is detected in the converted text 203, the input text 201 will not agree with the converted text 203. For this portion, the portion of the converted text 203 containing the error may be displayed on top of the corresponding input text 201. This allows the student to see the input text being read and the converted text representing the student's oral reading attempt. The classification of the error may be communicated with a label, color-coding, or some other method.
TABLE 1 shown below includes a possible reading report.
As can be seen, Table 1 shows the converted text, the actual text, and a reading output that labels each error according to the category into which the error was classified. In this non-limiting example, the following labels are used: Insertions (I), Omissions/Deletions (D), and Substitutions (S).
The error identifier 220 identifies errors in an oral reading attempt, as reflected in the converted text 203, and assigns a classification to each error. The error identifier 220 includes an error detector 221, an omission classifier 222, an appeal classifier 224, an attempt classifier 226, an insertion classifier 228, a self-correction classifier 230, a substitution classifier 232, and error manager 234.
The error detector 221 detects differences between the input text 201 and the converted text 203. In one aspect, any difference is classified as an error. Once the error is identified, it is classified into a category. Though shown as a series of components, the classification could be done by a single component, such as a multi-class classifier. A multi-class classifier is a machine learning technology that receives training data comprising labeled errors. For example, a substitution could be shown and labeled. The training data could comprise a sample input text and a sample converted text. During training, the training data is input to the classifier until the classifier is able to correctly assign a classification to an unlabeled error. The classification may take the form of a class confidence score. In one aspect, a transformer model is used to assign the error classification. Various heuristic approaches are also possible and a combination of heuristics and machine learning are possible.
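The difference-detection step can be sketched with a standard sequence alignment. The example below is an illustrative heuristic sketch, not the claimed implementation; it uses Python's difflib to align expected and spoken words and emits the Insertion (I), Deletion (D), and Substitution (S) labels from Table 1.

```python
import difflib

def detect_errors(input_text, converted_text):
    """Align the expected words with the spoken words and label differences."""
    expected = input_text.lower().split()
    spoken = converted_text.lower().split()
    errors = []
    matcher = difflib.SequenceMatcher(a=expected, b=spoken)
    for tag, i1, i2, j1, j2 in matcher.get_opcodes():
        if tag == "delete":      # word in the input text was not spoken -> omission
            errors.append(("D", expected[i1:i2]))
        elif tag == "insert":    # extra spoken words -> insertion
            errors.append(("I", spoken[j1:j2]))
        elif tag == "replace":   # spoken word differs from expected -> substitution
            errors.append(("S", expected[i1:i2], spoken[j1:j2]))
    return errors

print(detect_errors("the pony ran fast", "the uh pony walked fast"))
# [('I', ['uh']), ('S', ['ran'], ['walked'])]
```

Each labeled difference could then be handed to the specialized classifiers described below to refine a generic insertion into an attempt, appeal, or self-correction.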
The omission classifier 222 identifies errors of omission, which occur when a word in the input text 201 is not read by the reader. Omissions may be detected by a heuristic that looks for one or more missing words. When the error fits the omission pattern, the error may be classified as an omission.
The appeal classifier 224 identifies appeals, which are requests by the reader for help. For example, the reader may ask, “what is this word?” This may be considered a type of insertion where words not in the input text 201 are found in the converted text 203. In addition, a determination may be made that other words are not missing from the surrounding text. If words are missing, then a substitution may have occurred. A heuristic approach may look for commonly asked questions in the inserted text. If the inserted text does not match a commonly asked question, then it may be classified as an insertion, but not an appeal. The heuristic could also treat an insertion in the form of a question (for example, a clause starting with who, what, when, where, why, or how) as an indication that a question is being asked.
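A minimal sketch of the appeal heuristic follows. The question-starter list and the `surrounding_words_missing` flag are illustrative assumptions standing in for whatever signals the system actually uses.

```python
QUESTION_STARTERS = {"who", "what", "when", "where", "why", "how"}

def classify_insertion(inserted_words, surrounding_words_missing):
    """Label extra spoken words: a substitution when nearby expected words
    are also missing, an appeal when the insertion reads like a question,
    and a plain insertion otherwise."""
    if surrounding_words_missing:
        return "substitution"
    if inserted_words and inserted_words[0].lower() in QUESTION_STARTERS:
        return "appeal"
    return "insertion"
```

For example, the inserted words "what is this word" with no surrounding words missing would be labeled an appeal.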
The attempt classifier 226 identifies an aborted attempt to read a word. This may also be described as hesitation. The attempt could initially be a type of insertion where extra words or sounds not forming a word are in the converted text. A heuristic could look for phonetic sounds similar to those in the word coming after the attempt and identify an attempt. If the user goes on to skip the word, then an attempt with an omission could be detected.
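One way to sketch the attempt heuristic is a leading-character overlap between the inserted fragment and the word that follows it, as a crude spelling-based proxy for a shared phonetic onset; the function name and overlap threshold are hypothetical.

```python
def looks_like_attempt(inserted_fragment, following_word, min_overlap=2):
    """Treat an inserted fragment as an aborted attempt when it shares a
    leading character sequence with the word that follows it (a crude,
    spelling-based proxy for a shared phonetic onset)."""
    frag = inserted_fragment.lower().strip(".,-")
    nxt = following_word.lower()
    overlap = 0
    for a, b in zip(frag, nxt):
        if a != b:
            break
        overlap += 1
    return overlap >= min_overlap
```

Under this sketch, an inserted "po" before the word "pony" would register as an attempt, while an inserted "uh" would not. A production system would more likely compare phones rather than letters.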
The insertion classifier 228 detects an insertion. An insertion occurs when words that are not in the input text 201 occur in the converted text 203. An error may be classified as an insertion when the extra words are not an attempt, self-correction, or appeal, which are specific types of insertions.
The self-correction classifier 230 detects a self-correction, which occurs when the reader repeats a portion of the text where an error was previously made. The self-correction is a type of insertion and will be detected as an error because of extra words in the converted text 203. A heuristic could identify a self-correction when the inserted words include one or more words from the input text that are in a phrase where an error occurred.
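That heuristic could be sketched as a word-overlap test between the inserted words and the phrase where the earlier error occurred; this is an illustrative simplification, since a fuller check might also require the repeated words to appear in order.

```python
def is_self_correction(inserted_words, error_phrase_words):
    """Flag inserted words as a self-correction when they repeat one or
    more words from the phrase where the earlier error occurred."""
    inserted = {w.lower() for w in inserted_words}
    phrase = {w.lower() for w in error_phrase_words}
    return bool(inserted & phrase)
```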
The substitution classifier 232 detects a substitution, which occurs when one word is substituted for another word.
The error manager 234 keeps a running record of the errors being made and a classification assigned to the errors. These errors may be output in near real time in a running reading record report.
The cueing system 250 assigns a competency score, shown as MSV score 258, to the oral reading attempt using the errors identified and classified by the error identifier 220. Components of the cueing system 250 include the meaning cues component 252, the structural cues component 254, and the visual cues component 256. The MSV score is a combination of the meaning cues, structural cues, and visual cues found in the identified errors.
The meaning cues component 252 identifies meaning cues using the output from the error identifier 220. The meaning cue could be output as a score. A strong meaning cue detected in the error could result in a high score, and a weak meaning cue in a low score. The input text 201 and converted text 203 may also be inputs to the meaning cues component 252. Meaning cues, also known as semantic cues, typically use the reader's knowledge of the real world to help the reader determine appropriate words in context. An example might be “John put the food in the ______.” Based on the reader's knowledge of how the world works, the word “bowl” would be more likely than the word “bawl.” In other words, the meaning cues attempt to determine whether a word used in the error makes sense given the surrounding words. If the word does not make sense, then a low meaning cue score could be assigned. The low score might indicate the reader is not using context to help read the passage.
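A toy sketch of the meaning-cue check follows. The hand-built plausibility table is a hypothetical stand-in; a real system would more plausibly score the substituted word with a language model's probability in context.

```python
# Hypothetical context-plausibility table standing in for a language model.
PLAUSIBLE_WORDS = {
    "john put the food in the": {"bowl", "dish", "fridge"},
}

def meaning_cue_score(preceding_context, word_read):
    """Return 1.0 when the word read makes sense in context, else 0.0."""
    plausible = PLAUSIBLE_WORDS.get(preceding_context.lower(), set())
    return 1.0 if word_read.lower() in plausible else 0.0
```

Here, reading "bowl" in the example context would score 1.0 while "bawl" would score 0.0, mirroring the "bowl"/"bawl" example above.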
The structural cues component 254 identifies structural cues using the output from the error identifier 220. The input text 201 and converted text 203 may also be inputs to the structural cues component 254. The structural cue could be output as a score. A strong structural cue detected in the error could result in a high score, and a weak structural cue in a low score. Structural cues, also known as syntactic cues, factor in things like grammar, punctuation, and word order to help a reader identify the likely words. In a sentence like “The ______ ate the food,” the sequence would cue the reader structurally that the word they are looking for is likely to be a noun. In other words, the structural cues attempt to determine whether a word used in the error makes sense grammatically given the surrounding words. If the word does not make sense grammatically, then a low structural cue score could be assigned. For example, if the reader made an error by including a verb or adjective in the above sentence, then a low structural score could be assigned. The low score might indicate the reader is not using grammar context to help read the passage.
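The structural check could be sketched as a part-of-speech comparison between the word read and the grammatical slot it fills. The lookup table below is a hypothetical stand-in; a real system would likely use a part-of-speech tagger.

```python
# Hypothetical part-of-speech lookup; a real system might use a POS tagger.
PART_OF_SPEECH = {"dog": "noun", "cat": "noun", "ran": "verb", "happy": "adjective"}

def structural_cue_score(word_read, expected_pos):
    """Return 1.0 when the word read fits the grammatical slot, else 0.0."""
    return 1.0 if PART_OF_SPEECH.get(word_read.lower()) == expected_pos else 0.0
```

Reading "dog" where a noun is expected scores 1.0; reading "ran" in the same slot scores 0.0.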
The visual cues component 256 identifies visual cues using the output from the error identifier 220. The input text 201 and converted text 203 may also be inputs to the visual cues component 256. Visual cues, also known as graphophonic cues, use letter-sounds to help the reader determine the word. An example visual cue occurs when a reader tries to sound out a word by looking at the visual patterns in the word. If the word or words in the error do not have the same sounds as the correct word, then a low visual cue score could be assigned. Conversely, if the word or words in the error have similar sounds to those in the original text, then a high visual cue score could be assigned.
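One simple way to approximate the visual-cue comparison is character-level similarity between the expected word and the word read, as a rough proxy for shared letter-sound patterns; a production system would more likely compare phone sequences.

```python
import difflib

def visual_cue_score(expected_word, word_read):
    """Character-level similarity (0-1) as a rough, spelling-based proxy
    for shared letter-sound patterns between the two words."""
    return difflib.SequenceMatcher(
        a=expected_word.lower(), b=word_read.lower()).ratio()

print(visual_cue_score("bowl", "bawl"))  # 0.75 -- close visual pattern
```

A substitution of "bawl" for "bowl" scores high on the visual cue even though it may score low on the meaning cue, which is exactly the distinction the MSV analysis is meant to surface.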
The MSV score can be a grouping of the individual cue scores. In other words, the MSV score can have three attributes and three corresponding scores. The three attributes (MSV) could be combined in a weighted combination to determine a final MSV score. The MSV score could serve as an overall reading competency score. The MSV score could be combined with other factors, such as the total number of errors, the reading level of the input text, and other factors, to determine a separate competency score. The individual components of the MSV score can help teachers select assignments that address a student's weaknesses.
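The weighted combination described above can be sketched as follows; the particular weights are illustrative placeholders, not values specified by the system.

```python
def combine_msv(meaning, structural, visual, weights=(0.4, 0.3, 0.3)):
    """Weighted combination of the three cue scores (each 0-1) into a
    single MSV score; the weights shown are illustrative placeholders."""
    wm, ws, wv = weights
    return wm * meaning + ws * structural + wv * visual
```

A student who scores high on visual cues but low on meaning cues would still receive a middling combined score, while the per-attribute breakdown remains available for assignment selection.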
Turning now to
Reading interface system 300 includes the reading interface 205 (and its components 310, 312, 314, 316, 320, 322, 324, 330, 332, 334, 340, and 342), user profile store 350 (and its components 352, 354, and 356), recommendation system 360, and library interface 362. These components may be embodied as a set of compiled computer instructions or functions, program modules, computer software services, or an arrangement of processes carried out on one or more computer systems, such as computing device 1500 described in connection to
In one aspect, the functions performed by components of reading interface system 300 are associated with one or more applications, services, or routines. In particular, such applications, services, or routines may operate on one or more user devices (such as user device 102a), servers (such as server 106), may be distributed across one or more user devices and servers, or be implemented in the cloud. Moreover, in some aspects, these components of reading interface system 300 may be distributed across a network, including one or more servers (such as server 106) and client devices (such as user device 102a), in the cloud, or may reside on a user device, such as user device 102a.
The reading interface component 205 generates a series of interfaces that help users, such as students, learn to read. The series of interfaces can also help teachers and parents assign reading work to students and monitor student progress. The interfaces can help students better understand their reading performance. Aspects of the technology described herein are not limited to the interfaces described with reference to
The student interface 310 includes an assignment interface component 312, arcade game interface component 314, and phonetic game interface component 316. As an initial step, a user may provide credentials to log in. To log in, a student may provide a username, password, biometric input, or some other credential. Upon logging in, the student may be given access to interfaces and resources within the reading instruction system that the student is authorized to access. The interfaces and resources that a given student is authorized to access may be managed using information in the student profile 352. The student profile may associate the student with a particular user ID that is in turn associated with the student's past reading assignments, in-progress reading assignments, assigned reading assignments, rewards, and the like. Links to the resources and interfaces that a student can access may be provided on the student's homepage.
In one embodiment, the student is taken to an assignment-interface generated by the assignment interface component 312. The assignment interface may be similar to the assignment interface 400 described subsequently with reference to
The library interface 362 can also display recommendations generated by the recommendation system 360. The recommendations can be for books, reading assignments, games, and the like. The recommendation system 360 can take various factors into account when making a recommendation. These factors may be retrieved from a student profile 352. The factors can include survey data provided by a student. The survey data can include an interest survey. The genres of interest to a particular student can be learned through the interest survey. The factors can also include a reading level appropriate for the student. The reading level appropriate for the student can be calculated periodically and updated to match the student's increased reading level as progress is made. The recommendations can be viewed by the student, teacher, or parent. In the case of the teacher or parent, the recommendation will be for a student.
The recommendation engine 360 can also generate a recommendation based on specific reading performance data for an individual student. The performance data can indicate word groups or grammar that the student needs to practice. In addition to word groups, the recommendation system 360 can generate recommendations based on sounds (e.g., phonemes) the student needs to practice pronouncing. The sounds can be identified by looking at a confidence score assigned by a speech-to-text converter when processing the audio recording of the student reading. In the case of word groups or sounds, reading assignments may be recommended that emphasize the desired word groups and sounds.
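The confidence-score signal could be sketched as a simple threshold filter over per-word recognition confidences. The (word, confidence) pair format and the threshold value are assumptions for illustration; speech-to-text engines expose confidence in various forms.

```python
def words_to_practice(word_confidences, threshold=0.6):
    """Given (word, confidence) pairs from a speech-to-text converter,
    return words whose recognition confidence fell below the threshold,
    a rough signal of sounds the student may need to practice."""
    return [word for word, confidence in word_confidences
            if confidence < threshold]
```

The returned words (or the phonemes within them) could then drive the recommendation of reading assignments that emphasize those sounds.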
The assignment interface component 312 can provide a link to a reading interface generated by the reading-reception interface component 340. The reading interface presents a text to be read and captures an audio recording of the student reading the text. An example reading interface is described subsequently with reference to
The arcade game interface 314 provides an interface through which arcade games may be played using reward points earned by completing reading assignments. Access to the arcade games in the arcade game interface 314 may be limited by time and other factors. In a point system, points earned by completing assignments may be associated with the student profile 352. The arcade game interface 314 may determine whether a given student has enough reward points to play a game. Upon playing a game for a designated amount of time, the arcade game interface 314 may deduct a corresponding amount of reward points from the student's reward account.
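The reward-point accounting could be sketched as follows; the per-minute pricing model is an assumed detail for illustration.

```python
def deduct_play_time(balance, cost_per_minute, minutes):
    """Charge reward points for arcade game time; refuse when the
    student's balance cannot cover the requested minutes."""
    cost = cost_per_minute * minutes
    if cost > balance:
        raise ValueError("not enough reward points")
    return balance - cost
```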
The phonetic game interface component 316 generates phonetic games for the user to play. In one example, the phonetic game includes a picture of an entity, such as a cat. The user then attempts to spell cat to win the game. As an alternative game mode to spelling cat, the student may simply pronounce the word cat. The phonetic game interface component 316 generates games that encourage the user to pronounce different sounds.
The teacher interface 320 gives a teacher access to resources within the reading instruction system. An identification associated with an individual teacher can be used to manage access to various resources, such as class and student records. The teacher profile 354 can associate the teacher to a class, which is in turn associated with student identifications for students in the class. The teacher profile 354 can also associate the teacher with individual students. These relationships are used to give the teacher access to reading assignments and other information associated with students in his/her class. The teacher view component 322 generates a teacher view, which may be similar to the teacher view 1100 described subsequently. The teacher view can provide a dashboard that displays reading statistics for the entire class. The teacher view can also provide reading data for individual students. The teacher view can also allow a teacher to assign new reading assignments to a student and/or class. The teacher view can serve as a homepage for teachers. The teacher can view a reading-performance interface generated by the result-output interface component 342. The reading-performance interface can display specific errors made when a specific student reads a specific text. The reading performance interface can also allow the teacher to listen to an audio recording of the student's reading effort.
The teacher interface 320 includes an error-approval interface component 324 that allows a teacher to approve or deny error suggestions made by teachers and students. If the teacher agrees that the classifier made a mistake, the mistake can be corrected through the error approval interface 324. A misclassification record is created and can be used to change a proficiency score assigned based on the reading assignment. The misclassification record can also be used as training data to tune an error classifier.
The parent interface component 330 gives parents access to their student's reading assignments and performance metrics. A parent profile 356 may be used to associate a parent with one or more students. Once associated with a student, the parent may view information associated with the student. The parent view 332 can provide a view that is very similar to the student's assignment interface described in
Turning now to
To the left of the assignment interface 400 are a home control 404 and a library control 406. The home view is shown. The library control 406 will take the user to a library of available books. Students may select books from the library to add to their assignments. Teachers and parents may assign books to a student. Though not shown, the library view can allow users to sort books by name, type, author, reading level, genre, and other characteristics.
The personal assistant 408 communicates a message to the user. In this case, the message reminds the user to check back for homework. The personal assistant 408 may communicate messages selected based on a heuristic. For example, a message could be selected to highlight an assignment the user should view. For example, recently completed assignments may be highlighted to teachers and parents. The highlighting can remind the teachers and parents to review a detailed record of the student's reading performance. A message could help a student achieve a reading goal, such as reading 50 pages per week. In this example, the personal assistant could select an existing assignment that would cause the student to complete his or her goal upon completing the existing assignment. Various heuristics can be used to generate appropriate messages.
Near the top of the assignment interface 400, a progress-overview section 405 is provided. The progress-overview section 405 includes an overall reading accuracy score 410, a book progress indicator 412, a test progress indicator 414, and a book completion record 416. The overall reading accuracy score 410 combines the accuracy scores assigned to various completed reading assignments to provide an overall score. In an embodiment, the overall score is based on a threshold number of the most recently read books, such as the last ten books. Alternatively, the score could be based on books read within a threshold period, such as the last month or year. In aspects, the overall score is an average of all scores in the calculation. In another aspect, the overall score is a weighted average, which favors the more recently completed reading assignments.
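The weighted-average aspect described above can be sketched as follows. The linearly increasing recency weights and the function name are illustrative assumptions; the description leaves the exact weighting scheme open.

```python
def overall_accuracy(scores, max_books=10):
    """Combine per-assignment accuracy scores into an overall score.

    `scores` is ordered oldest to newest. Only the most recently
    completed `max_books` assignments are considered, and a weighted
    average favors the more recent ones (an assumed weighting).
    """
    recent = scores[-max_books:]
    # Linearly increasing weights: the newest assignment counts most.
    weights = range(1, len(recent) + 1)
    total = sum(w * s for w, s in zip(weights, recent))
    return total / sum(weights)
```

For example, `overall_accuracy([70, 80, 90, 100])` weights the most recent score of 100 four times as heavily as the oldest score of 70, yielding an overall score above the plain average.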
The book progress indicator 412 communicates how many assigned books have been read and how many assigned books are left to read. The test progress indicator 414 communicates how many assigned tests have been taken and how many assigned tests remain to be taken. The book completion record 416 communicates how many e-books have been read by the student.
Various controls are provided for customizing the assigned homework and tests shown in the homework and tests section 420. The search bar 422 allows the user to search for a book by name. The type selector 424 allows a user to select a type of assignment to be displayed. In this example, the type selector 424 allows a user to select either homework or tests. The date due control 426 allows the user to filter based on the due date of an assignment. In this example, only assignments due this week are shown. The name sort control 428 allows the user to alphabetically sort assignments based on a book name. The type sort control 430 allows the assignments to be sorted by type. The due date sort control 432 allows the user to sort the assignments chronologically by due date. The completion-status sort control 434 allows the user to sort assignments based on completion status. The completion status can include not started, finished, and in progress. In one embodiment, the degree of completion is indicated by a color. Red may indicate that the reading assignment is less than 25% complete, orange may indicate that the reading assignment is between 26% and 75% complete, and green may indicate that the reading assignment is more than 75% complete. In addition to a color, the size of the completion bar may be proportionally sized to the reading progress. The action sort control 436 allows the user to sort the assignments based on an associated action, such as start reading 442, continue 446, and re-read 454.
Five assignments are shown. The first assignment 440 is a homework assignment that the student has not started. The user can start the assignment by selecting the start reading 442 action control. The second assignment 444 is a test that is in progress. The student may continue the test by selecting the continue 446 action control. The third assignment 448 is a homework assignment that is in progress. The fourth assignment 450 is a homework assignment that is in progress. The fifth assignment 452 is a homework assignment that has been completed.
Turning now to
The reading-progress interface 500 includes a reading-goal progress interface 510. The reading-goal progress interface 510 visually depicts a student's progress towards completing a daily reading goal. In the example, the reading goal is 20 minutes and the student needs to read for another minute and a half to achieve the 20-minute goal. The progress made towards achieving a goal is indicated textually (e.g., 90%) and with a timer display. The reading-goal progress interface 510 also includes a trend line showing the time a student spends reading daily. The trend line can be color-coded to indicate whether the trend is improving, staying the same, or decreasing. Green may be associated with an increasing trend. Red may be associated with a decreasing trend. Orange may be associated with a level trend. The reading-goal progress interface 510 includes a control that allows a detailed view of the reading statistics for the student. The detailed view could show each book read over a period along with the time spent per day on each book.
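The trend-line coloring used throughout these interfaces can be sketched with a simple heuristic. Comparing first-half and second-half averages, and the `tolerance` threshold, are assumptions; the description does not specify how a trend direction is computed.

```python
def trend_color(daily_values, tolerance=0.05):
    """Pick a trend-line color from a series of daily readings.

    Green for an increasing trend, red for decreasing, orange for
    level, matching the color scheme described in the interfaces.
    `tolerance` is the relative change treated as "level".
    """
    half = len(daily_values) // 2
    earlier = sum(daily_values[:half]) / half
    later = sum(daily_values[half:]) / (len(daily_values) - half)
    change = (later - earlier) / earlier
    if change > tolerance:
        return "green"   # improving
    if change < -tolerance:
        return "red"     # decreasing
    return "orange"      # level
```

For instance, daily reading times of `[10, 10, 20, 20]` minutes would produce a green trend line, while the reverse series would produce a red one.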
The reading-progress interface 500 includes a reading-proficiency interface 512. The reading-proficiency interface 512 provides a proficiency score achieved for reading assignments within a plurality of assignments used to calculate the proficiency level. The plurality of books could be selected based on a time period, such as a day, week, two weeks, month, or year. Other customized time periods are possible. In this example, the reading proficiency level is 64% and the goal for the student is 80%. Alternatively, the proficiency score could be based on the last threshold number of reading assignments, such as the last ten assignments. A trend line shows that the reading proficiency level achieved by the student has been decreasing. The trend line could be based on a daily proficiency score calculated for the student. The trend line can be color-coded to indicate whether the trend is improving, staying the same, or decreasing. Green may be associated with an increasing trend. Red may be associated with a decreasing trend. Orange may be associated with a level trend. The reading-proficiency interface 512 includes a control that allows a detailed view of the reading statistics for the student. The detailed view could show each reading assignment completed or attempted over a time period along with a proficiency calculated for the reading assignments.
The reading-progress interface 500 includes a Lexile-level interface 514. The Lexile Framework for Reading is an educational tool that uses a measure called a Lexile to match readers with books, articles, and other leveled reading resources. Readers and books are assigned a score on the Lexile scale, in which lower scores reflect easier readability for books and lower reading ability for readers. The Lexile framework uses quantitative methods, based on individual words and sentence lengths, rather than qualitative analysis of content to produce scores.
The Lexile-level interface 514 provides a Lexile level achieved for reading assignments within a plurality of assignments used to calculate the Lexile level. The plurality of books could be selected based on a time period, such as a day, week, two weeks, month, or year. Other customized time periods are possible. Alternatively, the Lexile score could be based on the last threshold number of reading assignments, such as the last ten assignments. In this example, the reading Lexile level is level 9 (1100/1125 points) and the goal for the student is level 15. A trend line shows that the Lexile level achieved by the student has been increasing. The trend line could be based on a daily Lexile score calculated for the student. The trend line can be color-coded to indicate whether the trend is improving, staying the same, or decreasing. Green may be associated with an increasing trend. Red may be associated with a decreasing trend. Orange may be associated with a level trend. The Lexile-level interface 514 includes a control that allows a detailed view of the reading statistics for the student. The detailed view could show each reading assignment completed or attempted over a time period along with a Lexile level calculated for the reading assignments.
The reading-progress interface 500 includes a word family interface 516. The word family interface 516 shows the number of assignments the student has completed that emphasize a particular word family. The reading-progress interface 500 includes a comprehension question interface 518. The comprehension question interface 518 displays an average comprehension measure indicating how the student has been doing on recent comprehension tests. A comprehension test measures a reader's understanding of a reading assignment. The recent comprehension tests could be selected based on a time period, such as a day, week, two weeks, month, or year. Other customized time periods are possible. Alternatively, the average comprehension measure could be based on the last threshold number of reading assignments, such as the last ten assignments.
Turning now to
The reading interface 600 includes a title of the book 605 from which the reading assignment is derived. A reading assignment may be an entire book, a chapter of a book, multiple chapters of the book, a page range, or some other designated portion of the book. The reading assignment could also be a short story, news article, or the like. The progress tracker 610 communicates to the student progress made during the reading assignment. The progress tracker 610 shown indicates that the assignment has been finished. The audio processing guide 614 communicates that the proficiency score and error record are being generated for the completed reading assignment. Upon completion, a reading-analysis interface 700 may be presented.
Turning now to
The homework analysis section 716 includes a page control 718. The page control 718 allows a user to navigate from page-to-page of the reading assignment. The text on a single page of the homework analysis interface may correspond to the text presented on a single page of the reading interface 600.
The audio interface 719 includes an audio control 722 and audio progress tracker 720. The audio control 722 allows a recording of the student reading the text 730 to be output to speakers or headphones. The audio progress tracker 720 communicates what part of the recording is being output.
The homework analysis section 716 explains each error made by the student while reading the text 730 on the page being displayed. An insertion error 724 is a word added that is not in the text. A deletion 726 is a word that is in the text but not pronounced in the audio recording of the student's reading effort. A substitution 728 occurs when the student replaces a first word in the text with a second word that is not in the text. Each of the explanations may be associated with a color. The color may be used to point out errors of each type within the text. For example, a substitution can be associated with blue and the first error 732 may also be shown in blue. The first error 732 is a substitution because the student read "settle," which is not in the text, instead of "little." The second error 734 is a deletion because the student left out the word "a." The third error 736 is a substitution because the student spoke "hat" instead of "had." The fourth error 738 is an insertion because the student read "the" when "the" is not in the text.
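The three error types can be reproduced with a word-level alignment between the expected text and the speech-to-text output. The sketch below uses Python's difflib as an illustrative stand-in for the machine classifier described herein; it is not the actual classifier, and the tokenization is an assumption.

```python
import difflib

def classify_errors(expected_text, spoken_text):
    """Align the expected text with the transcribed reading and label
    each mismatch as a substitution, deletion, or insertion."""
    expected = expected_text.lower().split()
    spoken = spoken_text.lower().split()
    matcher = difflib.SequenceMatcher(a=expected, b=spoken)
    errors = []
    for op, i1, i2, j1, j2 in matcher.get_opcodes():
        if op == "replace":
            # Word(s) in the text replaced by different spoken word(s).
            errors.append(("substitution", expected[i1:i2], spoken[j1:j2]))
        elif op == "delete":
            # Word(s) in the text that were never pronounced.
            errors.append(("deletion", expected[i1:i2], []))
        elif op == "insert":
            # Spoken word(s) that do not appear in the text.
            errors.append(("insertion", [], spoken[j1:j2]))
    return errors
```

Aligning an expected phrase such as "the little pony" against a reading of "the settle pony" would label "settle" a substitution for "little," mirroring the first error 732 described above.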
Turning now to
Turning now to
The misclassification record may be used for retraining a classifier, such as the one or more classifiers associated with the error identifier 220. The misclassification record can be used to generate supplemental training data that may be used to improve the classification accuracy. In an aspect, the misclassification record is used to generate a student-specific version of a classifier. This type of training may be described as tuning the classifier. A misclassification record may be used to generate training data that includes audio of the student reading along with the correct classification as a training label. As an alternative to or in addition to the student-specific training, misclassification records may be aggregated and used for general retraining.
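The tuning step described above amounts to turning each teacher-corrected record into a labeled training example. The record fields and function names below are illustrative assumptions; the document does not fix a schema for the misclassification record.

```python
from dataclasses import dataclass

@dataclass
class MisclassificationRecord:
    # Illustrative fields; the description does not fix a schema.
    student_id: str
    audio_clip: bytes        # audio of the student reading the span
    predicted_label: str     # the classifier's (incorrect) category
    corrected_label: str     # the teacher-approved category

def to_training_examples(records, student_id=None):
    """Build (audio, label) pairs for retraining.

    With `student_id` set, only that student's records are used,
    yielding data for a student-specific (tuned) classifier;
    otherwise records are aggregated for general retraining.
    """
    return [
        (r.audio_clip, r.corrected_label)
        for r in records
        if student_id is None or r.student_id == student_id
    ]
```

Passing a `student_id` implements the student-specific tuning path, while omitting it implements the aggregated general-retraining path.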
Turning now to
Turning now to
The teacher selection control 1101 displays a teacher's name. Users with administrative privileges, such as a principal, may use the teacher selection control 1101 to select a teacher. A teacher may only have access to their own classes and students. The dashboard includes a class proficiency score 1120, an average error score 1122, total students 1124, and books read 1126 during a time period. Each dashboard widget may include a trend line. The trend line can be color-coded to indicate whether the trend is improving, staying the same, or decreasing. Green may be associated with an increasing trend. Red may be associated with a decreasing trend. Orange may be associated with a level trend.
The student view 1128 shows information for students in a selected classroom. The view selector 1130 is set to view all students. However, various criteria may be used to display a subset of students, such as those with a particular reading level, progress score, or the like. The time control 1132 designates a period of time for which statistics are gathered, such as a week. In a first column 1140, student names are displayed. In a second column 1142, a proficiency score associated with each student is displayed. In a third column 1144, a number of books read during the time period is displayed. In a fourth column 1146, a number of assignments completed during the time period is displayed. The progress section 1148 shows progress made during a time period, such as a week, selected with the range control 1150. The progress section includes a book completion interface 1152 that displays a total number of books read in the class and the total books assigned. The comprehension interface 1154 shows the total number of comprehension questions answered correctly and the total number of questions asked.
Turning now to
At step 1230, the method comprises converting the audio data to converted text. The process of converting speech to text has been described previously with reference, at least, to the speech-to-text component 210 of
At step 1240, the method comprises identifying an error in the converted text by detecting a difference between the converted text and the text. At step 1250, the method comprises classifying, with a machine classifier, the error into an error category. The classification of reading errors has been described previously with reference to
At step 1260, the method comprises generating a reading competency score using the error category. The calculation of a reading competency score has been described previously with reference to
Turning now to
At step 1330, the method comprises converting, using a speech-to-text engine, the audio data to converted text. The process of converting speech to text has been described previously with reference, at least, to the speech-to-text component 210 of
Turning now to
At step 1430, the method comprises converting, using a speech-to-text engine, the audio data to converted text. The process of converting speech to text has been described previously with reference, at least, to the speech-to-text component 210 of
Referring to the drawings in general, and initially to
The technology described herein may be described in the general context of computer code or machine-useable instructions, including computer-executable instructions such as program components, being executed by a computer or other machine, such as a personal data assistant or other handheld device. Generally, program components, including routines, programs, objects, components, data structures, and the like, refer to code that performs particular tasks or implements particular abstract data types. The technology described herein may be practiced in a variety of system configurations, including handheld devices, consumer electronics, general-purpose computers, specialty computing devices, etc. Aspects of the technology described herein may also be practiced in distributed computing environments where tasks are performed by remote-processing devices that are linked through a communications network.
With continued reference to
Computing device 1500 typically includes a variety of computer-readable media. Computer-readable media may be any available media that may be accessed by computing device 1500 and includes both volatile and nonvolatile, removable and non-removable media. By way of example, and not limitation, computer-readable media may comprise computer storage media and communication media. Computer storage media includes both volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer-readable instructions, data structures, program modules, or other data.
Computer storage media includes RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disks (DVD) or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices. Computer storage media does not comprise a propagated data signal.
Communication media typically embodies computer-readable instructions, data structures, program modules, or other data in a modulated data signal such as a carrier wave or other transport mechanism and includes any information delivery media. The term “modulated data signal” means a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal. By way of example, and not limitation, communication media includes wired media such as a wired network or direct-wired connection, and wireless media such as acoustic, RF, infrared, and other wireless media. Combinations of any of the above should also be included within the scope of computer-readable media.
Memory 1512 includes computer storage media in the form of volatile and/or nonvolatile memory. The memory 1512 may be removable, non-removable, or a combination thereof. Example memory includes solid-state memory, hard drives, optical-disc drives, etc. Computing device 1500 includes one or more processors 1514 that read data from various entities such as bus 1510, memory 1512, or I/O components 1520. Presentation component(s) 1516 present data indications to a user or other device. Example presentation components 1516 include a display device, speaker, printing component, vibrating component, etc. I/O ports 1518 allow computing device 1500 to be logically coupled to other devices, including I/O components 1520, some of which may be built in.
Illustrative I/O components include a microphone, joystick, game pad, satellite dish, scanner, printer, display device, wireless device, a controller (such as a stylus, a keyboard, and a mouse), a natural user interface (NUI), and the like. In aspects, a pen digitizer (not shown) and accompanying input instrument (also not shown but which may include, by way of example only, a pen or a stylus) are provided in order to digitally capture freehand user input. The connection between the pen digitizer and processor(s) 1514 may be direct or via a coupling utilizing a serial port, parallel port, and/or other interface and/or system bus known in the art. Furthermore, the digitizer input component may be a component separated from an output component such as a display device, or in some aspects, the usable input area of a digitizer may coexist with the display area of a display device, be integrated with the display device, or may exist as a separate device overlaying or otherwise appended to a display device. Any and all such variations, and any combination thereof, are contemplated to be within the scope of aspects of the technology described herein.
An NUI processes air gestures, voice, or other physiological inputs generated by a user. Appropriate NUI inputs may be interpreted as ink strokes for presentation in association with the computing device 1500. These requests may be transmitted to the appropriate network element for further processing. An NUI implements any combination of speech recognition, touch and stylus recognition, facial recognition, biometric recognition, gesture recognition both on screen and adjacent to the screen, air gestures, head and eye tracking, and touch recognition associated with displays on the computing device 1500. The computing device 1500 may be equipped with depth cameras, such as stereoscopic camera systems, infrared camera systems, RGB camera systems, and combinations of these, for gesture detection and recognition. Additionally, the computing device 1500 may be equipped with accelerometers or gyroscopes that enable detection of motion. The output of the accelerometers or gyroscopes may be provided to the display of the computing device 1500 to render immersive augmented reality or virtual reality.
A computing device may include a radio 1524. The radio 1524 transmits and receives radio communications. The computing device may be a wireless terminal adapted to receive communications and media over various wireless networks. Computing device 1500 may communicate via wireless protocols, such as code division multiple access (“CDMA”), global system for mobiles (“GSM”), or time division multiple access (“TDMA”), as well as others, to communicate with other devices. The radio communications may be a short-range connection, a long-range connection, or a combination of both a short-range and a long-range wireless telecommunications connection. When we refer to “short” and “long” types of connections, we do not mean to refer to the spatial relation between two devices. Instead, we are generally referring to short range and long range as different categories, or types, of connections (i.e., a primary connection and a secondary connection). A short-range connection may include a Wi-Fi® connection to a device (e.g., mobile hotspot) that provides access to a wireless communications network, such as a WLAN connection using the 802.11 protocol. A Bluetooth connection to another computing device is a second example of a short-range connection. A long-range connection may include a connection using one or more of CDMA, GPRS, GSM, TDMA, and 802.16 protocols.
The technology described herein has been described in relation to particular aspects, which are intended in all respects to be illustrative rather than restrictive. While the technology described herein is susceptible to various modifications and alternative constructions, certain illustrated aspects thereof are shown in the drawings and have been described above in detail. It should be understood, however, that there is no intention to limit the technology described herein to the specific forms disclosed, but on the contrary, the intention is to cover all modifications, alternative constructions, and equivalents falling within the spirit and scope of the technology described herein.
This application claims the benefit of U.S. Provisional Application No. 63/255,806, filed Oct. 14, 2021, and U.S. Provisional Application No. 63/264,972, filed Dec. 6, 2021. The entirety of both applications is hereby incorporated by reference.
Number | Date | Country
---|---|---
63255806 | Oct 2021 | US
63264972 | Dec 2021 | US