This disclosure relates to automatic generation of learning activities for use in an educational environment, and more specifically, to a system and a method configured to enable a teacher, using only minimal inputs, to automatically generate a learning activity for one or more students.
Computer-aided assessment tests are widely used in a variety of educational or aptitude settings, such as primary and secondary schools, universities, standardized or aptitude tests (e.g., GRE, MCAT, GMAT, state achievement exams, etc.), entrance examinations, and online training courses. In educational settings, computer-aided tests may be employed in both traditional, in-classroom environments and remote, networked out-of-classroom settings. For example, a full-time worker requiring flexibility may enroll in an online program with a web-based educational institution and may conduct all of his or her exams exclusively via computer-based tests. As another example, traditional educational institutions, and in particular elementary education systems, are increasingly employing in-class computer-based tests and other individual and group learning activities with their students. Generally, computer-based activities and tests lower the costs of teaching by automating the evaluation of each student's exam and by freeing the teacher from the time otherwise spent grading exams. However, despite the time saved in grading, the teacher is still required to manually create computer-based tests for his or her students.
One conventional technique for creating a computer-based activity or test involves a teacher manually formulating a computer-based test by writing his or her own questions and entering the questions into the computer. Although this is a straightforward method of creating a computer-based activity or test, it quickly becomes time consuming and difficult to create multiple computer-based activities or tests for different subjects or grade levels, or to edit existing activities or tests. Another conventional technique for creating a computer-based activity or test includes utilizing a repository of previously entered activity test material, content, or test questions. In this case, the teacher or a third-party entity must diligently draft each question or test material item that is to be stored in the repository; the teacher may then choose questions or material residing in the repository to manually create a computer-based activity or test. While this technique creates an activity or test more quickly than writing each question from scratch, the teacher is still required to choose each question or instructional item manually. Furthermore, this technique may not perform well in all settings, especially when the content or test material in the repository must be frequently changed or updated. This technique is particularly tedious and time consuming with an extremely large repository, such as the online aggregate website, Multimedia Educational Resource for Learning and Online Teaching (MERLOT). In that case, the teacher must painstakingly sift through vast amounts of test material, choose the test material closest to the teacher's lesson plan, and then typically modify the material to suit the students' needs. Likewise, this technique is also inadequate with a small repository because of the insufficient number of questions from which to select.
An educational activity system, according to one example embodiment, allows a teacher user to specify activity parameters that define an activity for one or more students to complete on a computer or a mobile device, uses the activity parameters to determine appropriate subject matter from a content asset database, generates an activity incorporating the determined appropriate subject matter, evaluates generated activities for correctness after a student has completed the activity, and stores the results of each student in a student performance database. To create an activity, the activity editor retrieves all subject, grade level, and activity template data from a knowledge database and displays the subject, grade level, and activity template data to the teacher user. The teacher user selects the appropriate subject, grade level, and activity template data that the system will use in creating an activity. Using the teacher user selected data, the activity editor retrieves applicable topic data in the knowledge database for use in creating the activity and displays the topic information to the teacher user. The teacher user specifies the appropriate topic data for use in the activity. The activity editor retrieves all appropriate categories from the knowledge database that correspond to the teacher user selected topic and displays the category information to the teacher user. The teacher user selects the desired categories, and the activity editor retrieves all items associated with the teacher user specified categories from an asset database and randomly displays a portion of the items to the teacher user at a preview layout activity creation stage. At the preview layout stage, the teacher user may customize the activity by determining whether to include or to omit particular items in the activity for the one or more students. The activity editor stores the created activity in an activity database.
When an authorized student user requests to perform the activity, an inference engine retrieves and displays the activity to the student user. According to some embodiments, after recording the student user's selections or responses to the activity, the inference engine may be further employed to evaluate the activity for correctness and store the results in a student performance database for later retrieval by the teacher user, or for automatic generation of subsequent activities, with modification of level of difficulty according to student performance on the completed activity. To perform automatic generation of subsequent activities, the inference engine may maintain initial values for each of the teacher-specified subject, grade level, and activity template data, regenerate a new filtered set of test items based on these maintained initial values, and recreate a new electronic activity using the regenerated filtered set of test items.
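For illustration only, the difficulty adjustment described above might be sketched as follows, where the function name and the score thresholds are hypothetical and do not form part of any disclosed embodiment:

```python
# Hypothetical sketch of difficulty adjustment for an automatically
# regenerated follow-up activity; the thresholds are illustrative only.
def next_grade_level(current_level, score):
    """Raise or lower the difficulty tag based on the student's score
    (a fraction between 0.0 and 1.0) on the completed activity."""
    if score >= 0.9:
        return current_level + 1          # strong performance: harder material
    if score < 0.5:
        return max(1, current_level - 1)  # weak performance: easier material
    return current_level                   # otherwise keep the same level
```

Any other adjustment policy, such as adjusting per topic rather than per overall score, may equally be employed.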
Although the following text sets forth a detailed description of numerous different embodiments, it should be understood that the legal scope of the description is defined by the words of the claims set forth at the end of this disclosure. The detailed description is to be construed as exemplary only and does not describe every possible embodiment since describing every possible embodiment would be impractical, if not impossible. Numerous alternative embodiments could be implemented, using either current technology or technology developed after the earliest effective filing date of this patent, which would still fall within the scope of the claims.
It should also be understood that, unless a term is expressly defined in this patent using the sentence “As used herein, the term ‘______’ is hereby defined to mean . . . ” or a similar sentence, there is no intent to limit the meaning of that term, either expressly or by implication, beyond its plain or ordinary meaning, and such term should not be interpreted to be limited in scope based on any statement made in any section of this patent (other than the language of the claims). To the extent that any term recited in the claims at the end of this patent is referred to in this patent in a manner consistent with a single meaning, that is done for the sake of clarity only so as to not confuse the reader, and it is not intended that such claim term be limited, by implication or otherwise, to that single meaning. Finally, unless a claim element is defined by reciting the word “means” and a function without the recital of any structure, it is not intended that the scope of any claim element be interpreted based on the application of 35 U.S.C. §112, sixth paragraph.
The test material database editing system 100 includes a server 103 that is connected to an administrator client 115 through a communication network 125. The asset database 107 is connected to or is disposed within the server 103 and stores test content data, or asset data, of any type, including for example, pictures, images, diagrams, illustrations, silhouetted images, words, phrases, sentences, paragraphs, sounds, music, animation, videos, dynamic objects (e.g., a multimedia platform), and lessons. Generally speaking, the data stored in the asset database 107 may be any data that is presented to a student while performing an activity and/or available for selection and incorporation into an activity by a teacher user. The knowledge database 105 is in communication with or is disposed within the server 103 and stores relational data of any type, including for example concepts, attributes, relationships, and taxonomical information. In general, the relational data stored in the knowledge database 105 may be any data that adds context or relational knowledge to the asset data in the asset database 107 (discussed below) and can be structured using any manner or technique.
The administrator client 115 stores an asset editor 120 and a knowledge editor 122 and may include a user interface 152. The asset editor 120 communicates with the asset database 107 via a network interface 136 and operates to enable a user to create, to add, to delete, or to edit asset data in the asset database 107. Similarly, the knowledge editor 122 communicates with the knowledge database 105 via the network interface 136 and operates to enable a teacher user to create, to add, to delete, or to edit relational data in the knowledge database 105. As illustrated in
The communication networks 125 and 127 may include, but are not limited to, any combination of a LAN, a MAN, a WAN, a mobile, a wired or wireless network, a private network, or a virtual private network. Moreover, while the communication networks 125 and 127 are illustrated separately in
As indicated above, the asset database 107, which may be stored in or may be separate from the server 103, may contain any type of test content data and is stored as data objects or asset data. Generally, asset data may be stored in any form of media, such as visual, or auditory media, and in any format (as discussed above). Any information associated with a particular asset data, such as metadata, keywords, tags, or hierarchical structure information, may also be stored together with the particular asset data. For example, a particular asset data in the asset database 107 may include an image depicting a bear eating a fish from a river in a forest. In this example, the keywords or tags might include “bear”, “fish”, “forest” and/or “bear eating fish.” These keywords or tags are stored together with the image in the asset database 107 as associated information to the image. Tags or keywords link asset data (e.g., an image) to facts or concepts contained within the asset data (e.g., “bear”, “fish”, “forest”). By tagging asset data with facts or concepts, the asset data is easily linked or integrated with the relational data in the knowledge database 105.
In addition to storing asset data, the asset database 107 may also store one or more template types that define the tasks or goals of an activity. Of course, template types may be stored in the knowledge database 105, the activity database 111, the activity editor 142, or any other suitable location. For example, a template type may be a chart template that includes three columns and a selection area of test items that are selected by a teacher user or determined by the inference engine 109 (discussed in more detail below and in
As indicated above, the knowledge database 105, which may be stored in or may be separate from the server 103, may contain any type of relational data that links facts and concepts in a network of complex relationships. As discussed above, this relational data may include, for example, concepts, facts, attributes, relationships, or taxonomical information. For example, all relational data (i.e., any data that relates one item of data to another item of data) may be generally classified as a characteristic of an item of factual data. Relational data may describe, link, associate, classify, attribute, give sequence to, or negate the item of factual data to different relational data or another item of factual data. While this relational data may be stored in the knowledge database 105 in any number of ways, manners, or schemas, the Entity-Attribute-Value (EAV) modeling technique is well suited to organizing relational concepts. In other words, the EAV model expresses concepts in a three-part relationship element that defines 1. an entity's 2. relationship to 3. another entity or value (i.e., a common format includes [1. entity, 2. relationship/attribute, 3. another entity/value]). For example, a relational data element might include the conceptual relationship of “a bear is a mammal”, or as it may be alternatively stored as an entry in the knowledge database 105, [bear, isa, mammal]. Another example entry may include the entry, “mammals have hair” or [mammal, skin cover, hair]. The following chart lists (but is not limited to) a series of examples of other EAV model or relational data elements:
In utilizing these EAV model elements, the inference engine 109 is capable of linking identical sub-elements of two relational data elements together so that new relationships dynamically emerge via deduction and can be automatically generated by the inference engine 109 as further described herein. In utilizing the EAV model, the inference engine 109 is capable of using the complex relationships that are dynamically created with each EAV relational data element entry into the knowledge database 105. As opposed to a fixed, simple hierarchically-designed data structure, the EAV model allows for the linking of different entities and values via attributes. Returning to the example above, the inference engine 109 may use the relational data entry [bear, isa, mammal] and the relational data entry [mammal, skin cover, hair] to deduce “a bear has hair” or [bear, skin cover, hair] via linking identical sub-elements. This deduction would not be possible in a simple, hierarchically-designed data structure due to the rigidity of a hierarchy data structure. To implement this EAV model, for example, the inference engine 109 first stores all relational data entries within the knowledge database 105 into memory 140 at runtime and deduces new relationships among the stored relational data entries. In this example, the inference engine 109 infers a new relationship, “a bear has hair,” from the two relational data entries, “a bear is a mammal” and “mammals have hair,” and uses the new relationship when generating new activities. In other words, a sub-element may inherit the attributes and values of another sub-element in the process of deduction. In the example above, the sub-element “bear” inherits all the same attributes (“skin cover”) and values (“hair”) as another sub-element (“mammal”) through the inferring of the inference engine 109.
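For illustration only, the deduction described above may be sketched in simplified form, with relational data entries represented as (entity, attribute, value) triples; the function name and in-memory representation are hypothetical and do not form part of any disclosed embodiment:

```python
def infer(kb):
    """One round of deduction: if [X, isa, Y] and [Y, attr, V] are both stored,
    deduce [X, attr, V] by linking the identical sub-element Y."""
    deduced = set(kb)
    for (x, rel, y) in kb:
        if rel == "isa":
            for (e, attr, v) in kb:
                if e == y and attr != "isa":
                    deduced.add((x, attr, v))
    return deduced

# The "bear" example from above, stored as triples.
kb = {("bear", "isa", "mammal"), ("mammal", "skin cover", "hair")}
# Linking the shared sub-element "mammal" deduces ("bear", "skin cover", "hair").
```

Repeating the round until no new triples appear would extend the deduction across longer isa chains.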
Through this hierarchical linking and inheritance structure of the EAV model, the inference engine 109 may dynamically determine topics and the respective categories. Examples of topics or attributes may include animal classification, skin cover, reproduction, capital cities, habitat, etc. Example categories for a specific topic, for instance skin covers, may include fur, scales, feathers, etc. Of course, topics and categories may be interchangeable, and the previously listed examples are not intended to limit the relationship or defining characteristics between entities. As seen from the chart above, the relationship between the entity and another entity (or value) may be defined by a variety of attributes that may characterize a specific property or a specific value to the entity.
More generally, the different types of attributes may include classification attributes, descriptor attributes, relational attributes, sequential attributes, equivalent attributes, negative attributes, etc. A relational data element that includes a classification attribute type may result in an entity inheriting attribute values associated with an attribute value directly associated with the respective entity by way of the classification attribute. For example, a relational data element entry with the properties, [bear, isa, mammal], results in the entity (bear) inheriting (isa) the classification or properties of the value (mammal) by way of being associated together in the relational data element. Another attribute type may include a descriptor attribute type that may define one or more descriptions of an entity by a corresponding attribute value. As a result, entities from multiple relational data elements having common descriptor attribute types and corresponding attribute values are determined to be related. For instance, a relational data element entry with the properties, [bear, food source, salmon], results in the entity (bear) being defined as including the value (salmon) as a food source. Additional examples of descriptor attribute types include habitat type, reproduction type, number of legs type, locomotion type, capital city type, etc. An additional attribute type includes a relational attribute type that may define how an entity relates to an attribute value. As shown in the chart, the relational data element, [rain, prerequisite, clouds], relates the entity (rain) to the value (clouds) via a prerequisite requirement that clouds must be present for rain to exist. The sequential attribute type may define a sequential relationship between an attribute value and an entity of a relational data element.
For example, the relational data element, [tadpole, precedes, frog], defines the sequential relationship between an entity (tadpole) and a value (frog) so that a tadpole must always occur before a frog. The equivalent attribute type may indicate that an attribute value and an entity are equivalents of each other. The example relational data element, [orca, aka, killer whale], equates the entity (orca) with the value (killer whale) so that the inference engine 109 treats the entity and the value exactly the same. The negative attribute type may indicate that an attribute value is not associated with an entity despite other potential inheritances. For example, the relational data element, [platypus, !reproduction, live young], indicates that the entity (platypus) does not inherit a specific value (live young) despite other relational data elements, [platypus, isa, mammal] (a platypus being a mammal) and [mammal, reproduction, live young] (mammals give birth to live young), that would otherwise indicate an inheritance of those properties.
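For illustration only, the negation behavior described above might be sketched as follows, with a "!" prefix marking a negative attribute; the function and in-memory representation are hypothetical and do not form part of any disclosed embodiment:

```python
# Hypothetical sketch: a negative attribute (prefixed "!") blocks an inherited value.
kb = [
    ("platypus", "isa", "mammal"),
    ("mammal", "reproduction", "live young"),
    ("platypus", "!reproduction", "live young"),
]

def inherited_values(entity, attr, kb):
    """Collect values for attr via isa-inheritance, honoring "!" negations."""
    negated = {(e, a.lstrip("!"), v) for (e, a, v) in kb if a.startswith("!")}
    values = []
    parents = [entity]
    seen = set()
    while parents:
        cur = parents.pop()
        if cur in seen:
            continue
        seen.add(cur)
        for (e, a, v) in kb:
            # Keep a directly asserted or inherited value unless negated for entity.
            if e == cur and a == attr and (entity, attr, v) not in negated:
                values.append(v)
            if e == cur and a == "isa":
                parents.append(v)  # climb the classification chain
    return values
```

Under this sketch, a query for the platypus's reproduction yields no value, even though a mammal's reproduction would yield "live young".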
In addition, each relational data element may also include a grade level tag that indicates the age or the grade level appropriateness of the test material. In other words, the grade level tag may also be considered a difficulty level tag in denoting the level of difficulty of the relational data element. This grade level tag may be associated with a relational data element or one sub-element of a relational data element, and as such, the term “grade level” may generally mean level of difficulty and is not necessarily tied to an academic grade or other classification. For example, [bear, isa, mammal] may be associated with a grade level of 2, an age level of 8, a grade range of K-2, or an age range of 6-8, while the sub-element [bear] may be associated only with a grade level of 1 or an age level of 6. In this manner, the inference engine 109 may retrieve only age level, grade level, age range, or grade range appropriate relational data from the knowledge database 105 by inspecting the grade level tag associated with the relational data.
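For illustration only, retrieval by grade level tag might be sketched as follows, with hypothetical names and illustrative grade values:

```python
# Hypothetical sketch: each relational data element carries a grade level tag,
# and retrieval keeps only elements at or below the requested level.
tagged_kb = [
    (("bear", "isa", "mammal"), 2),    # grade level 2 (illustrative)
    (("cell", "isa", "organism"), 7),  # grade level 7 (illustrative)
]

def elements_for_grade(max_grade, kb):
    """Return the relational data elements appropriate for max_grade and below."""
    return [elem for (elem, grade) in kb if grade <= max_grade]
```

An equivalent filter may be applied with age levels, grade ranges, or age ranges in place of a single grade value.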
During operation, the inference engine system 101 communicates with the test material database editing system 100 through the communicative coupling of the inference engine 109 and the server 103. First, this communicative coupling allows the inference engine 109 to retrieve knowledge data from the knowledge database 105 for use in inferring and determining appropriate test material for a specific activity. Moreover, this communicative coupling allows the inference engine 109 to retrieve asset data from the asset database 107 for displaying content within an activity to the user. This communicative coupling may also permit the server 103 to send an update message that makes the inference engine 109 aware of an update made to data stored within the asset database 107 or the knowledge database 105 so that the inference engine 109 may alert the teacher client 130 that new test material is available.
In a general example scenario, a teacher user may wish to create an activity that tests a particular subject and specific grade level for one or more students. Moreover, the teacher user may also want to specify a template or a format for the activity that is most suitable for the students who will be performing the activity. To do so, the teacher user interfaces with the activity editor 142 via a user interface 134. The activity editor 142 sends a request to the inference engine 109 to display all or a subset of available subjects, grade levels, and activity templates. It is appreciated that in other embodiments, a teacher user may not always select a subject, grade level, and activity template, but instead may only select one or two of those options, such as a grade level and subject (while, for example, an activity template is selected automatically), or only a subject, for example. Similarly, in other embodiments, a teacher user may select multiple different values for one or more of the grade level, subject, and/or templates (or any other selection described herein), which may allow for a more varied activity to be generated and/or allow narrowing the multiple choices by the inference engine 109 logic. In response to the request from the activity editor 142, the inference engine 109 retrieves all or a subset of subject data, grade level data, and template types from the knowledge database 105 and conveys the subject data, grade level data, and template types to the activity editor 142 for display to the teacher user in selecting an appropriate subject and grade level to be associated with the activity.
The teacher user specifies one or more of the desired subject, grade level, and template type for the activity via the user interface 134, and the activity editor 142 communicates the selected subject, grade level, and template type to the inference engine 109 and requests at least a subset of the topic data that is associated with the specified subject and grade level. The inference engine 109 stores the template type associated with the activity in the activity database 111 for later use in the preview layout stage. In response to a request for topic data associated with the specified subject and grade level, the inference engine 109 retrieves all topic data associated with the selected subject and grade level from the knowledge database 105 and relays at least a subset of the topic data to the activity editor 142 to display to the teacher user.
The teacher user chooses the desired topic (or a combination of topics, in other examples) for the activity via the user interface 134, and the activity editor 142 communicates the specified topic to the inference engine 109. In response to the request from the activity editor 142, the inference engine 109 retrieves all or a subset of category data from the knowledge database 105 that is associated with the topic specified by the teacher user and relays the retrieved category data to the activity editor 142 to display to the teacher user. The teacher user selects one or more categories via the user interface 134, and the activity editor 142 conveys a request to the inference engine 109 to display all or a subset of items associated with the one or more selected categories. In response to the request of the activity editor 142, the inference engine 109 retrieves all or a subset of item data associated with the specified one or more categories from the asset database 107 and relays the retrieved item data to the activity editor 142 to display to the teacher user in a preview layout stage. In the preview layout stage, the activity editor 142 displays all the received items in a library section and randomly pre-populates a portion of the items in the library in a choice pool area. Items randomly displayed in the choice pool are proposed to be included in the activity for the one or more students. At this preview layout stage, the teacher user may wish to include additional items from the displayed library in the choice pool or may wish to remove items that are pre-populated by the inference engine 109 from the choice pool. The teacher user may include additional items or may remove pre-populated items via the user interface 134.
When the teacher user is satisfied with the items residing in the choice pool and wishes to create the activity, the activity editor 142 communicates the selected items in the choice pool to the inference engine 109 and requests (signals) that the inference engine 109 create the activity. In other embodiments, the teacher user may not be given the choice to modify item data. In response to the request from the activity editor 142, the inference engine 109 stores the selected item data from the choice pool received from the activity editor 142 within the activity database 111 in conjunction with the previously selected template type. Together with the selected item data and template type data, the inference engine 109 may also store additional activity data in the activity database 111, such as the teacher user's information and the activity creation date.
It is appreciated that, according to other embodiments, some or all of the activity creation and selection operation may not be performed by a teacher but may instead be performed by an administrator or third-party service provider. For example, in a third-party service provider model where multiple activities are pregenerated and provided (sold, licensed, hosted, etc.) to a teaching institution already configured and ready to be utilized, instead of a teacher generating the activities (and making some or all of the subject, grade level, template, topic, category, item choices), these may be performed by a third-party administrator. It is therefore appreciated that, in some embodiments, some or all of the actions described as being performed by a teacher user may be performed by another party.
Thereafter, an authorized student user may request the activity from the inference engine 109 via a user interface 150. In response to the request, the inference engine 109 retrieves the stored activity that is associated with the student user from the activity database 111 and relays the activity to the student client 132 to display to the student user. It is appreciated that, according to other embodiments, an activity may be generated for printing a hard copy, allowing a student to complete the activity on paper and without a computer or other student client 132 device. According to one embodiment, as the student user performs each task or question of the activity, the student user's response is transmitted as task result data to the inference engine 109 for evaluation. In response to the request from the student client 132, the inference engine 109 evaluates the task result data for correctness, generates corresponding evaluation data, stores the result data and the evaluation data as student performance data associated with the student user in the student performance database 113, and sends the evaluation data to the student client 132 to display to the student user for immediate feedback. At any time, the teacher user may request the task result data and the evaluation data of the particular student from the inference engine 109 via the user interface 134 of the teacher client 130. In response to the request, the inference engine 109 may retrieve the task result data and the evaluation data associated with the particular student from the student performance database 113 and relay the task result data and the evaluation data to the teacher client 130 to display to the teacher user.
Of course, the activity data stored in the activity database 111 can be created or accessed by multiple activity editors 142 (other activity editors not shown), can be modified, and can be stored back into the activity database 111 at various different times to create and modify activities. As will be understood, the activity database 111 does not need to be physically located within the inference engine 109. For example, the activity database 111 can be placed within a teacher client 130, can be stored in external storage attached to the inference engine 109, can be stored within the server 103, or can be stored in a network attached storage. Additionally, there may be multiple inference engines 109 that connect to a single activity database 111. Likewise, the activity database 111 may be stored in multiple different or separate physical data storage devices. Furthermore, the inference engine 109 does not need to be directly connected to the server 103. For example, the inference engine 109 can be placed within a teacher client 130 or can be stored within the server 103. Similarly, the student performance data stored in the student performance database 113 may be accessed by multiple activity editors 142, can be modified, and can be stored back into the student performance database 113 at various different times to modify student performance data, if necessary. The student performance database 113 need not be located in the inference engine 109, but for example, can be placed within a teacher client 130, can be stored in external storage attached to the inference engine 109, can be stored within the server 103, or can be stored in a network attached storage. Additionally, there may be multiple inference engines 109 that connect to a single student performance database 113. Likewise, the student performance database 113 may be stored in multiple different or separate physical data storage devices.
Of course, some embodiments of the activity editor 142 and the inference engine 109 may have different and/or other modules than the ones described herein. Similarly, the functions described herein can be distributed among the modules in accordance with other embodiments in a different manner than that described herein. However, one possible operation of these modules is explained below with reference to
More particularly, at a step or a block 405, the inference engine interface module 205 within activity editor 142 operates to present all available subjects, grade levels, and templates to the teacher user via the user interface 134. The inference engine interface module 205 will use the knowledge database interface module 305 within the inference engine 109 to access the knowledge database 105 within the server 103 to obtain the relational data needed for display. The displayed subjects, grade levels, and templates may be rendered in text, images, icons, or any other suitable type of data representation. It is appreciated that, according to some embodiments, only a subset of the subjects, grade levels, and templates may be presented. Similarly, in some embodiments, a teacher user may not be presented the option to select each of the subject, grade level, or template options, but instead these may default to predetermined values (e.g., if set in preferences based on teacher user grade taught, teacher user subject matter taught, or template preferences) or some or all selections may be generated randomly (e.g., random template generation). It is further appreciated that, in some embodiments, user preferences may be specified and customizable at different levels of control and association, such as different template preferences for different grade levels, subjects, etc.
At a block 410, the activity selection module 210 enables a teacher user to highlight or select the desired subject, grade level, and template type via the user interface 134 to thereby define the one or more subjects, the one or more grade levels, and the template type to be associated with a particular activity. In one example illustrated in
Once the teacher user indicates or otherwise selects one or more subjects, one or more grade levels, and a template type for the activity, a block 415 of
Returning to
The returned topics in this example include food sources, habitats, locomotion, abilities, body parts, and geographic regions. The returned topics may also include mammals, living organisms, legged animals, and omnivores from the “isa” attribute (e.g., a bear is a mammal) which represents an inherited property.
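The topic retrieval described above can be sketched as a walk over stored facts: an entity's direct attributes become topics, and additional topics (mammal, animal, living organism) are inherited by following the "isa" chain. The fact data and function names below are illustrative assumptions, not the patent's actual schema.

```python
# Hypothetical fact store: each entity maps attribute names to values,
# with "isa" linking an entity to its classification.
FACTS = {
    "bear": {
        "food sources": "fish, berries",
        "habitat": "forest",
        "locomotion": "walks",
        "isa": "mammal",
    },
    "mammal": {"isa": "animal"},
    "animal": {"isa": "living organism"},
}

def topics_for(entity):
    """Return the entity's direct attribute names plus every
    classification reached by following 'isa' links upward."""
    topics = set()
    current = entity
    while current in FACTS:
        data = FACTS[current]
        topics.update(k for k in data if k != "isa")
        parent = data.get("isa")
        if parent is not None:
            topics.add(parent)  # e.g., a bear "isa" mammal, so "mammal" is a topic
        current = parent
    return topics
```

Here `topics_for("bear")` yields the direct topics (food sources, habitat, locomotion) plus the inherited classifications (mammal, animal, living organism).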
As illustrated in
Referring back to
As a result and as illustrated in
Referring back to the
Generally speaking, each test item in the set of returned test items is associated with test item data that includes one or more characteristics. Each returned test item is related to at least one of the other returned test items in the set of test items via test item data (of each respective test item) that share one or more common characteristics. In other words, each returned test item of the plurality of related test items is associated with at least one test item data that includes one or more characteristics, and the relationship between one test item of the plurality of related test items and another test item of the plurality of related test items is determined by one or more common characteristics of at least one test item data of the one test item and of at least one test item data of the other test item. Moreover, these one or more characteristics can be attributes, attribute values, or attribute value pairs. In employing the EAV model specifically, attributes may define topics (e.g., animal classifications), attribute values may define the value of a respective entity or test item data (e.g., animal), and the attribute value pairs may define categories (e.g., mammals). In addition to being directly related via one or more common characteristics, the returned test items may also inherit one or more characteristics from the test item data of other returned test items of the plurality of related test items.
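The relatedness criterion described above — two test items are related when their test item data share one or more common characteristics — can be illustrated with attribute-value pairs in an EAV style. The data below is a minimal sketch under assumed names, not the system's actual representation.

```python
# Each hypothetical test item carries a set of (attribute, value) pairs;
# in EAV terms the attribute is the topic and the pair defines a category.
ITEMS = {
    "bear":  {("isa", "mammal"), ("habitat", "forest")},
    "wolf":  {("isa", "mammal"), ("habitat", "forest")},
    "trout": {("isa", "fish"), ("habitat", "river")},
}

def related(a, b):
    """Two items are related when they share at least one
    attribute-value pair (a common characteristic)."""
    return bool(ITEMS[a] & ITEMS[b])

def related_items(item):
    """All other items related to the given item."""
    return {other for other in ITEMS if other != item and related(item, other)}
```

In this sketch, "bear" and "wolf" are related through the shared pairs ("isa", "mammal") and ("habitat", "forest"), while "trout" shares no characteristic with either.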
More specifically, the relationship between two test items in the plurality of related test item data may further be defined by one test item (e.g., mammal) that has a test item data (e.g., isa animal) associated with a characteristic (e.g., animal) and another test item (e.g., legged animal) that has a test item data (e.g., isa animal) that shares the same common characteristic (e.g., animal). Moreover, the common characteristic (e.g., animal) may also be an additional test item data of an additional test item, and the additional test item data (e.g., animal) may have one or more additional characteristics (e.g., isa living organism) that may also serve as different test item data of different test items. This structure, in some instances, may lead to the original two test items (e.g., mammal and legged animal) inheriting each of the one or more additional characteristics (e.g., isa living organism) as a result of their association with the common and shared characteristic (e.g., animal). Thus, the inference engine 109, in this example, deduces that “a mammal is a living organism” and that “a legged animal is a living organism” from the stored facts that “a mammal is an animal,” “a legged animal is an animal,” and “an animal is a living organism” via the common characteristic of “animal.” This deduction may also be extended to new test items that have test item data (e.g., plant) associated with the common one or more characteristics (e.g., isa living organism) that were inherited by the original two test items (e.g., mammal and legged animal) so that the new test item (e.g., plant) becomes related to the original two test items (e.g., mammal and legged animal).
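The deduction described above is a transitive closure over "isa" facts: from "a mammal is an animal" and "an animal is a living organism," the engine infers "a mammal is a living organism," and any item reaching a shared ancestor (e.g., plant) becomes related. A minimal sketch, assuming a simple single-parent fact table:

```python
# Hypothetical "isa" facts, each mapping an item to its classification.
ISA = {
    "mammal": "animal",
    "legged animal": "animal",
    "animal": "living organism",
    "plant": "living organism",
}

def ancestors(item):
    """All classifications inherited by following the 'isa' chain,
    e.g., mammal -> animal -> living organism."""
    out = []
    while item in ISA:
        item = ISA[item]
        out.append(item)
    return out

def share_ancestor(a, b):
    """True if two items are related through any common inherited
    (or direct) classification."""
    return bool(set([a] + ancestors(a)) & set([b] + ancestors(b)))
```

So `share_ancestor("mammal", "legged animal")` holds via "animal," and `share_ancestor("mammal", "plant")` holds via the inherited "living organism."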
Each characteristic of the one or more characteristics of a test item data of a test item may be one of at least two types. One type may result in the test item data inheriting additional characteristics associated with one or more characteristics directly associated with the test item data, and a second type may not result in inheritance of additional characteristics associated with one or more characteristics directly associated with the test item data. For example, “a platypus is a mammal” and “a mammal is an animal” leads to the platypus inheriting the characteristic of being an animal; however, the additional relational data element, “mammals give live birth,” would not, in this instance, lead to the platypus inheriting the characteristic of giving live birth. Additionally, a topic or a category includes either an attribute or an attribute value that is associated with one or more test item data, and upon the selection of the topic or the category, the inference engine 109 returns a plurality of related test items associated with the one or more test item data either directly associated with or inheriting the selected topic, category, or attribute value.
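The distinction between the two characteristic types can be sketched by keeping inheritable "isa" links and non-inheritable property facts in separate stores. The data and names below are illustrative assumptions chosen to mirror the platypus example; notably, the live-birth property is deliberately not propagated down the chain.

```python
# Inheritable classification links.
ISA = {"platypus": "mammal", "mammal": "animal"}

# Non-inheritable property facts, stated directly on an entity.
PROPERTIES = {"mammal": {"gives live birth"}}

def has_classification(entity, classification):
    """Inheritable type: walk the 'isa' chain upward."""
    node = entity
    while node in ISA:
        node = ISA[node]
        if node == classification:
            return True
    return False

def has_property(entity, prop):
    """Non-inheritable type: only directly stated properties count."""
    return prop in PROPERTIES.get(entity, set())
```

Under this sketch the platypus inherits "animal" through "mammal," yet does not inherit "gives live birth," matching the example above.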
In any event, the returned set of test items may retain their respective tags so that the items may be sorted at a later time. The asset database interface module 310 communicates these returned test items to the activity editor 142 for display. Referring back to
Once the teacher user is satisfied with the items to be tested that are residing in the choice pool area 810, at a block 440, the teacher user may indicate or otherwise select the creation of the activity. At a block 445, the activity creation module 315 creates the activity by associating each item in the choice pool area 810 and the selected template from the block 410 with the activity and stores the activity and associated data in the activity database 111.
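The creation step at block 445 — associating the choice-pool items and the selected template with a new activity and storing the result — might look like the following. The record fields and the in-memory stand-in for the activity database 111 are assumptions for illustration only.

```python
import uuid

activity_db = {}  # stand-in for the activity database 111

def create_activity(subject, grade_level, template, choice_pool):
    """Associate the selected template and choice-pool items with a new
    activity record and store it, returning the new activity's id."""
    activity = {
        "id": str(uuid.uuid4()),
        "subject": subject,
        "grade_level": grade_level,
        "template": template,
        "items": list(choice_pool),  # items from the choice pool area 810
    }
    activity_db[activity["id"]] = activity
    return activity["id"]
```

A usage sketch: `create_activity("science", 3, "matching", ["bear", "wolf"])` returns an id under which the stored activity, with its items and template, can later be retrieved or modified.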
Referring back to
Referring back to
In
Referring back to
Certain embodiments are described herein as including logic or a number of components, modules, or mechanisms. Modules may constitute either software modules (e.g., code embodied on a machine-readable medium or in a transmission signal) or hardware modules. A hardware module is a tangible unit capable of performing certain operations and may be configured or arranged in a certain manner. In example embodiments, one or more computer systems (e.g., a standalone, client or server computer system) or one or more hardware modules of a computer system (e.g., a processor or a group of processors) may be configured by software (e.g., an application or application portion) as a hardware module that operates to perform certain operations as described herein.
In various embodiments, a hardware module may be implemented mechanically or electronically. For example, a hardware module may comprise dedicated circuitry or logic that is permanently configured (e.g., as a special-purpose processor, such as a field programmable gate array (FPGA) or an application-specific integrated circuit (ASIC)) to perform certain operations. A hardware module may also comprise programmable logic or circuitry (e.g., as encompassed within a general-purpose processor or other programmable processor) that is temporarily configured by software to perform certain operations. It will be appreciated that the decision to implement a hardware module mechanically, in dedicated and permanently configured circuitry, or in temporarily configured circuitry (e.g., configured by software) may be driven by cost and time considerations.
The various operations of example methods described herein may be performed, at least partially, by one or more processors that are temporarily configured (e.g., by software) or permanently configured to perform the relevant operations. Whether temporarily or permanently configured, such processors may constitute processor-implemented modules that operate to perform one or more operations or functions. The modules referred to herein may, in some example embodiments, comprise processor-implemented modules.
Similarly, the methods or routines described herein may be at least partially processor-implemented. For example, at least some of the operations of a method may be performed by one or more processors or processor-implemented hardware modules. The performance of certain of the operations may be distributed among the one or more processors, not only residing within a single machine, but deployed across a number of machines. In some example embodiments, the processor or processors may be located in a single location (e.g., within a home environment, an office environment or as a server farm), while in other embodiments the processors may be distributed across a number of locations.
The one or more processors may also operate to support performance of the relevant operations in a “cloud computing” environment or as a “software as a service” (SaaS). For example, at least some of the operations may be performed by a group of computers (as examples of machines including processors), these operations being accessible via a network (e.g., the Internet) and via one or more appropriate interfaces (e.g., application program interfaces (APIs)).
The performance of certain of the operations may be distributed among the one or more processors, not only residing within a single machine, but deployed across a number of machines. In some example embodiments, the one or more processors or processor-implemented modules may be located in a single geographic location (e.g., within a home environment, a school environment, an office environment, or a server farm). In other example embodiments, the one or more processors or processor-implemented modules may be distributed across a number of geographic locations.
Unless specifically stated otherwise, discussions herein using words such as “processing,” “computing,” “calculating,” “determining,” “presenting,” “displaying,” or the like may refer to actions or processes of a machine (e.g., a computer) that manipulates or transforms data represented as physical (e.g., electronic, magnetic, or optical) quantities within one or more memories (e.g., volatile memory, non-volatile memory, or a combination thereof), registers, or other machine components that receive, store, transmit, or display information.
Still further, the figures depict preferred embodiments of an inference engine system for purposes of illustration only. One skilled in the art will readily recognize from the foregoing discussion that alternative embodiments of the structures and methods illustrated herein may be employed without departing from the principles described herein. Thus, upon reading this disclosure, those of skill in the art will appreciate still additional alternative structural and functional designs for a system and a process for generating electronic activities through the disclosed principles herein. Thus, while particular embodiments and applications have been illustrated and described, it is to be understood that the disclosed embodiments are not limited to the precise construction and components disclosed herein. Various modifications, changes and variations, which will be apparent to those skilled in the art, may be made in the arrangement, operation and details of the method and apparatus disclosed herein without departing from the spirit and scope defined in the appended claims.
| Number | Date | Country |
| --- | --- | --- |
| 61577397 | Dec 2011 | US |
| | Number | Date | Country |
| --- | --- | --- | --- |
| Parent | 13595534 | Aug 2012 | US |
| Child | 15149028 | | US |