The invention relates to training, and more particularly, to training with a virtual apparatus via a computer animation executed by a computing environment, without the need for language interaction.
In the 1970's Dr. Gordon Shaw, a founder of the M.I.N.D. Institute, pioneered the area of spatial temporal reasoning and noted that American education had historically overly focused on the LA (Language-Analytical) portion of human reasoning, and substantially overlooked ST (Spatial-Temporal) reasoning as a learning mode. Since then, our understanding of how the brain works, and how learning takes place has increased. Gaining this understanding is an ongoing dynamic process.
Unfortunately, while a doctor from the mid-19th century would be lost, and a menace both to himself and to patients, in a 2005 United States hospital, a school teacher from 1850 would likely feel at home in today's classrooms, once she recovered from fainting at the girls' customary attire. While the world has dramatically changed in the last 150 years, teaching methods have not kept up. Generally, computing power is relegated to an adaptive and adjunct role, rather than being used to develop new learning paradigms. This state of affairs is unsatisfactory, and cannot be tolerated.
Admittedly, education systems face daunting challenges. Cultural and linguistic homogeneity have sharply decreased. In the United States, teachers can no longer presume that all students speak English. In some school districts, numerous languages are spoken in children's homes. The historical demand that children acclimate to America, and learn only English, has diminished. Many students lack effective family support systems. Television, video games, and computer games compete with homework for students' after-school attention, and the federal government, through partially or totally unfunded mandates, requires schools to do more with fewer resources.
Additionally, early childhood education, education of those for whom English is a second language, and educating the learning disabled are growth areas. While preschool children may, or may not, understand spoken English, there is no reason to presume that they read English. Additionally, many people, regardless of age, are illiterate. As the supply of good jobs for those having little formal education dries up, the need for creative educational paradigms increases.
While infrastructure is becoming available, this alone cannot solve the problem. Computers offer wondrous opportunities to expand learning, but a paradigm to generate a quantum leap in educational results is lacking. True, students can now do on-line research and create computerized presentations for class instead of typed or handwritten reports. They can search through billions of references using Internet search programs. However, this merely incrementally improves an existing educational process many view as grossly inadequate. Regardless, we have not addressed how to help students reason and think. As previously mentioned, the language gap in schooling is a continuing problem.
Using manipulatives in teaching mathematics began at least in the 1980s, and possibly well before. This accesses different reasoning pathways (Spatial-Temporal) than does dealing merely with abstractions. Virtual manipulatives have been created as computer-generated images. However, these manipulatives have merely been used as an adjunct or supplement to spoken or textual language, rather than a replacement therefor. Further, virtual manipulatives lack important aspects of real world objects.
In the real world, objects are subject to physical laws learned even by toddlers. For example, generally an object, when released, neither hovers, nor rises, but falls to the ground. An object at rest remains so unless acted upon. A hole is something to fall into. Free objects above the ground fall. In general, objects follow predictable rules. While a virtual manipulative looks like the real thing, it need not behave so, unless placed in a logical construct where it is constrained. We define such a constrained virtual manipulative as a virtual apparatus.
Traditionally, students learn subjects via the written or spoken word. By definition, language translates the concrete into the abstract. Therefore, in all such learning, language ability is a choke point or limiting factor. Such a limiting factor is often unnecessary. Consider the problem, 2+2=4. Placing two blocks next to two blocks provides four blocks, without any resort to language or symbolism. Imagine a construct where selecting two additional blocks creates a successful result.
While bridging the language gap at some point likely needs to be done, no a priori reason exists for all instruction to be language based. A chemistry or physics experiment is NOT language based, but has real world results. Similar geometric experiments exist. When imparting learning from a subject, or discipline, conveyable primarily or purely through physical means, such as a virtual apparatus, introducing language abstractions into the learning process is an unnecessary complicating and hindering factor.
Originally, counting was by pebbles, twigs, or other physical means. Additionally, subtraction, algebra, story problems, geometry, and many other subjects can be taught without using language. This is beneficial for several reasons. First, as the problem is inherently physical, there is no underlying reason to convert the problem from the physical realm into the abstract realm. Second, if the problem remains in the physical realm, the realm of spatial temporal reasoning, rather than the realm of language-analytical reasoning, then language skills are no longer either a limiting or a complicating factor. Third, if the problem is language independent, the students' English skills are irrelevant.
Imagine a classroom where 25 children who speak seven different languages can solve the same problem from the same computer program. A single version of a single problem is comprehensible and solvable to speakers of any language, or none at all. The same logical rules would be applicable to each subject. Reasoning, logic, math, science, and more, all taught without need for textual symbols.
Admittedly, a language transition would be required to transfer the learning skills from this logical construct into the broader logic of the physical world. However, this process is primarily relabeling previously learned facts and thought processes, and adapting thought processes to a broader perspective.
A need exists for a paradigm for teaching subjects using physical representations, such as a virtual apparatus, where using language is unnecessary, and then addressing the language gap after substantial learning occurs. Likewise, a need exists for methods of teaching processes that do not rely on unnecessary single or double translations between concrete and abstract thought processes. Further, a need exists for software that enables this new educational method.
Certain embodiments include a computer based learning method, and implementing software, that realize a new learning paradigm. This method of learning and teaching applies advances in the understanding of spatial temporal (ST) reasoning to allow students to learn without relying on language or language-analytical (LA) reasoning.
Accordingly, this method applies to any subject matter where ST reasoning applies. This method is particularly applicable for use with students having learning disabilities, limited language skills, or limited English skills in English based learning environments.
In one embodiment of the invention there is a method of training in conjunction with a computer animation executed by a computing environment, the method comprising: creating a logical construct, including establishing a set of consistent rules; providing a virtual apparatus with virtual components that are manipulated by a user being trained; teaching the user to understand logically consistent goals by using the virtual apparatus and without language interaction; modeling properties and the set of consistent rules for a subject area to the user by the virtual apparatus without language interaction; teaching the user to operate the virtual apparatus without language interaction; and requesting the user to operate the virtual apparatus to reach goals in a sequence of problems having progressive difficulty and without language interaction.
The method may additionally comprise selecting an appropriate subject matter area, and selecting a sequence of problems according to the selected subject matter area. The method may additionally comprise establishing error tolerances for the problems. The method may additionally comprise determining if the user has mastered a subject area sub-category. The method may additionally comprise introducing language elements to the user in conjunction with the computer animation if the user has mastered a subject area sub-category. The language elements may comprise numerals and/or labels.
In another embodiment of the invention there is a method of training in conjunction with a computer animation executed by a computing environment, the method comprising recognizing a task to be performed in the computer animation, recognizing a problem, displayed in the computer animation, to accomplish the task, and solving the problem by manipulation of a virtual apparatus executing in the computing environment.
The method may additionally comprise animating positive feedback to a user if the problem was solved correctly. Animating positive feedback may comprise showing the user why the problem was solved correctly. The method may additionally comprise introducing language elements to the user in conjunction with the computer animation. The language elements may comprise numerals and/or labels. The language elements may be added to the display near the virtual apparatus. The language elements may be added to the display in place of portions of the virtual apparatus. The method may additionally comprise solving a new problem in a language-analytic environment wherein the new problem and the solved problem are related to a concept being taught. The method may additionally comprise animating negative feedback to a user if the problem was not solved correctly. Animating negative feedback may comprise showing the user why the problem was not solved correctly.
The method of training may be self-contained. Solving the problem may utilize spatial temporal reasoning. The method may additionally comprise designating a task to be performed in the computer animation. The computer animation may include only essential images. The positive feedback computer animation may include only essential output. The negative feedback computer animation may include only essential output.
In another embodiment of the invention there is a method of training in conjunction with a computer animation executed by a computing environment, the method comprising recognizing a task to be performed in the computer animation, recognizing a problem, displayed in the computer animation, to accomplish the task, and solving the problem using a virtual apparatus with virtual components displayed in the computer animation. The method of training may be without intervention and instruction by a teacher.
In another embodiment of the invention there is a system for training, the system comprising a computing environment executing a computer animation, means for recognizing a task to be performed in the computer animation, means for recognizing a problem, displayed in the computer animation, to accomplish the task, and a virtual apparatus with virtual components displayed in the computer animation for solving the problem.
In another embodiment of the invention there is a computer readable medium containing software that, when executed, causes the computer to perform the acts of recognizing a task to be performed in a computer animation, recognizing a problem, displayed in the computer animation, to accomplish the task, and solving the problem using a virtual apparatus with virtual components displayed in the computer animation.
In another embodiment of the invention there is a method of training, the method comprising providing a virtual apparatus with virtual components that are manipulated by a user being trained, and teaching the user to understand logically consistent goals by using the virtual apparatus and without language interaction.
In another embodiment of the invention there is a computer readable medium containing software that, when executed, causes the computer to perform the acts of providing a virtual apparatus with virtual components that are manipulated by a user being trained, and teaching the user to understand logically consistent goals by using the virtual apparatus and without language interaction.
In another embodiment of the invention there is a method of training, the method comprising providing a virtual apparatus with virtual components that are manipulated by a user being trained, and modeling rules and properties for a subject area to the user by the virtual apparatus without language interaction.
In another embodiment of the invention there is a computer readable medium containing software that, when executed, causes the computer to perform the acts of providing a virtual apparatus with virtual components that are manipulated by a user being trained, and modeling rules and properties for a subject area to the user by the virtual apparatus without language interaction.
In another embodiment of the invention there is a method of training, the method comprising providing a virtual apparatus with virtual components that are manipulated by a user being trained; and teaching the user to operate the virtual apparatus without language interaction.
In another embodiment of the invention there is a computer readable medium containing software that, when executed, causes the computer to perform the acts of providing a virtual apparatus with virtual components that are manipulated by a user being trained, and teaching the user to operate the virtual apparatus without language interaction.
In another embodiment of the invention there is a method of training, the method comprising providing a virtual apparatus with virtual components that are manipulated by a user being trained, and requesting the user to operate the virtual apparatus to reach goals in a sequence of problems having progressive difficulty and without language interaction.
In yet another embodiment of the invention there is a computer readable medium containing software that, when executed, causes the computer to perform the acts of providing a virtual apparatus with virtual components that are manipulated by a user being trained, and requesting the user to operate the virtual apparatus to reach goals in a sequence of problems having progressive difficulty and without language interaction.
The following detailed description of certain embodiments presents various descriptions of specific embodiments of the invention. However, the invention can be embodied in a multitude of different ways as defined and covered by the claims. In this description, reference is made to the drawings wherein like parts are designated with like numerals throughout.
The terminology used in the description presented herein is not intended to be interpreted in any limited or restrictive manner, simply because it is being utilized in conjunction with a detailed description of certain specific embodiments of the invention. Furthermore, embodiments of the invention may include several novel features, no single one of which is solely responsible for its desirable attributes or which is essential to practicing the inventions herein described.
The system comprises various modules, tools, and applications as discussed in detail below. As can be appreciated by one of ordinary skill in the art, each of the modules may comprise various sub-routines, procedures, definitional statements, and macros. Each of the modules is typically separately compiled and linked into a single executable program. Therefore, the following description of each of the modules is used for convenience to describe the functionality of the preferred system. Thus, the processes that are undergone by each of the modules may be arbitrarily redistributed to one of the other modules, combined together in a single module, or made available in, for example, a shareable dynamic link library.
The system modules, tools, and applications may be written in any programming language such as, for example, C, C++, BASIC, Visual Basic, Pascal, Ada, Java, HTML, XML, or FORTRAN, and executed on an operating system, such as variants of Windows, Macintosh, UNIX, Linux, VxWorks, or other operating system. C, C++, BASIC, Visual Basic, Pascal, Ada, Java, HTML, XML and FORTRAN are industry standard programming languages for which many commercial compilers can be used to create executable code.
A computer or computing device may be any processor controlled device including terminal devices, such as personal computers, workstations, servers, clients, mini-computers, main-frame computers, laptop computers, a network of individual computers, mobile computers, palm-top computers, hand-held computers, set top boxes for a television, other types of web-enabled televisions, interactive kiosks, personal digital assistants, interactive or web-enabled wireless communications devices, mobile web browsers, or a combination thereof. The computers may further possess one or more input devices such as a keyboard, mouse, touch pad, joystick, pen-input-pad, and the like. The computers may also possess an output device, such as a visual display and an audio output. One or more of these computing devices may form a computing environment.
These computers may be uni-processor or multi-processor machines. Additionally, these computers may include an addressable storage medium or computer accessible medium, such as random access memory (RAM), an electronically erasable programmable read-only memory (EEPROM), programmable read-only memory (PROM), erasable programmable read-only memory (EPROM), hard disks, floppy disks, laser disk players, digital video devices, compact disks, video tapes, audio tapes, magnetic recording tracks, electronic networks, and other techniques to transmit or store electronic content such as, by way of example, programs and data. In one embodiment, the computers may be equipped with a network communication device such as a network interface card, a modem, or other network connection device suitable for connecting to a communication network. Furthermore, the computers execute an appropriate operating system such as Linux, UNIX, any of the versions of Microsoft Windows, Apple MacOS, IBM OS/2 or other operating system. The appropriate operating system may include a communications protocol implementation that handles all incoming and outgoing message traffic passed over the network.
The computers may contain program logic, or other substrate configuration representing data and instructions, which cause the computer to operate in a specific and predefined manner, as described herein. In one embodiment, the program logic may be implemented as one or more object frameworks or modules. These modules may be configured to reside on the addressable storage medium and configured to execute on one or more processors. The modules include, but are not limited to, software or hardware components that perform certain tasks. Thus, a module may include, by way of example, components, such as, software components, object-oriented software components, class components and task components, processes, functions, attributes, procedures, subroutines, segments of program code, drivers, firmware, microcode, circuitry, data, databases, data structures, tables, arrays, and variables.
The method and system apply to teaching a wide variety of disciplines and minimize or eliminate abstractions or symbols while teaching using constrained virtual manipulatives, defined as a virtual apparatus. The constraints require the virtual manipulatives to behave in accordance with easily learned rules. While symbols or abstractions may be added to the displayed puzzles, the crux of the learning is accomplished without having to resort to such abstractions. The method used by certain embodiments comprises three phases.
In one embodiment, the initial phase is a tutorial, which teaches the user how to use the implementing software. The tutorial is fully informative without using written or spoken language. One key technique is impoverishing the visual environment. Consider, for example, the old video game commonly known as PONG, which comprised a monochromatic monitor, each player's virtual paddle, and a cathode-ray tube trace that functioned as a virtual ball, constrained by the rule that the angle of incidence equaled the angle of reflection. After the user has demonstrated mastery of the tutorial, he or she may proceed to an initial subject matter puzzle.
The method involves creating single and multi-step puzzles, arranged hierarchically. The learning process involves having the user complete a non-textual training stage, and a non-textual learning stage. In one embodiment, the method progresses from non-textual puzzles to puzzles including text to bridge the language gap. The method and software are suitable for all users, as the non-textual learning puzzles and solutions are language independent.
Regardless of the academic discipline, puzzles are solved the same way. Each puzzle comprises an impassable success path and a set of possible solutions, each consisting of a virtual apparatus. In the preferred embodiment, in accord with learning theory best practices, each solution set member depicts a logically true statement. Ideally, each solution consists of one or more virtual manipulatives. The user selects a possible solution. The software applies the selected solution to the success path. A correct solution makes the success path passable; an incorrect solution leaves it impassable. Preferably, user-selected incorrect solutions remain visible as user resources. Preferably, additional success indicators are used, such as an animated figure, with easily learned movement rules, traveling the entire path. Preferably, the software displays only essential information and graphics. The method uses a computer with a display, a user-controlled input device, and, preferably, means for user identification, score recordation, and score analysis.
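The success-path mechanic described above can be sketched in Python as follows. This is a minimal illustration only; the class, attribute, and method names are hypothetical and do not come from the implementing software.

```python
class Puzzle:
    """A puzzle: a success path that starts impassable, plus a set of
    candidate solutions, each representing a virtual apparatus."""

    def __init__(self, gap_size, solutions):
        self.gap_size = gap_size    # what the success path is missing
        self.solutions = solutions  # candidate solutions offered to the user
        self.used = []              # incorrect picks stay visible as resources

    def apply(self, solution):
        """Apply a selected solution to the success path.

        Returns True when the path becomes passable (e.g. an animated
        figure can travel the entire path), False when it stays impassable."""
        if solution == self.gap_size:
            return True
        self.used.append(solution)  # keep the wrong pick on screen
        return False

# A path with a gap of size 4 and three candidate solutions.
puzzle = Puzzle(gap_size=4, solutions=[3, 4, 5])
assert puzzle.apply(3) is False     # path remains impassable
assert puzzle.apply(4) is True      # path becomes passable
```

Note that an incorrect selection is recorded rather than discarded, mirroring the preference above that incorrect solutions remain visible as user resources.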
Referring to
Software 12 comprises a training component 40 and a puzzle or problem component 42. A language integration component 44 may also be provided, as described below. Note that software 12 may allow a puzzle 22 to teach students 18 through both puzzle component 42 and language integration component 44. As shown particularly well in Example 1-3 (shown in Appendix C) and Example 1-4 (shown in Appendix D), language skills are introduced in tandem with the already learned spatial-temporal (ST) representations used to teach the applicable concepts. Therefore, student 18 does not require high level language-analytical (LA) skills to identify the previously learned ST skills in the prevalent LA environment. Although the description refers to puzzles, it also applies to any type of problem to be solved.
As also shown by Examples 1-1 through 1-7 (shown in the Appendices), ST skills can teach a variety of concepts, from geometry to fractions, multiplication, ratios, and more. Embedding text seamlessly into previously learned ST skills is believed to both maximize learning and minimize detriments caused by possible language skill deficiencies in students 18.
Training component 40 presents students 18 with puzzles 22 which teach students 18 the rules of the logical construct selected for the system 10. Mastery of training component 40 allows student access to puzzle component 42, as described below.
In certain embodiments, puzzle component 42 comprises puzzles 22 which are organized into bins and levels. Puzzles 22 teach students 18 through the utilization of ST reasoning.
Referring to
Referring to
Certain underlying counterintuitive techniques enhance the effectiveness of the system 10 (
In the system 10, an animated
The version of Seed described in Appendix H has several non-obvious differences from the prior art version. First, the present version is self-contained, and thus does not require intervention and/or instruction by a teacher. In the prior art versions of Big Seed, and other puzzles, teacher intervention and instruction is essential to enable the puzzles to provide a learning experience. Second, at least some phases of the current puzzle are self-contained in ST space and do not require that the student receive LA space input. Third, it has been determined that the student receives a better quality learning experience when the arena stage is impoverished.
This is counterintuitive. Video game theory over the past twenty-plus years has emphasized richer and richer graphics and video environments to attract the user's attention. Impoverishing the video environment, however, enhances the student's learning experience and allows students to gain maximum benefit from ST reasoning. In an impoverished environment, in one embodiment, only essential images or output are included in an animation of the problem and puzzle and in any positive or negative feedback to the user. Essential images include only those images directly involved in the puzzle/problem and its solution. Output related to unnecessary movement on the screen, or graphics not involved in the puzzle solution, is not essential. Essential images are of minimum complexity in that they contain little or no information not directly related to the problem. Research was conducted on a typical puzzle, identified as “bricks.” The “bricks” example shown in Appendix I demonstrates the differences between an impoverished and a non-impoverished environment in the puzzle. The advantages of the impoverished environment are typical of the puzzles utilized in the system 10.
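The impoverishment rule above can be sketched as a simple filter over the images in a scene. This sketch assumes, purely for illustration, that each on-screen image is tagged with whether it is directly involved in the puzzle or its solution; the data structure and names are hypothetical.

```python
def impoverish(images):
    """Keep only essential images: those directly involved in the
    puzzle/problem or its solution. Decorative graphics and output
    related to unnecessary movement are dropped before drawing."""
    return [img for img in images if img["essential"]]

scene = [
    {"name": "obstacle", "essential": True},
    {"name": "virtual_apparatus", "essential": True},
    {"name": "background_clouds", "essential": False},   # decorative only
    {"name": "dancing_mascot", "essential": False},      # unrelated movement
]
drawn = impoverish(scene)
assert [img["name"] for img in drawn] == ["obstacle", "virtual_apparatus"]
```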
The system has great potential for assisting a wide variety of students. Especially suited student populations include pre-school students, elementary students, learning disabled students, hearing impaired students, students with low achievement levels in language arts, as well as students lacking language skills in a particular language, even though their overall LA reasoning may be average or above.
As discussed in research articles, it is believed that ST reasoning based puzzles may also be used to identify those possessing gifted intelligence or genius qualities. The early identification of the gifted student is essential to maximizing the inherent capacities possessed by such learners.
The system may utilize alphanumeric identifying indicia without requiring the student to utilize LA reasoning in the puzzle solution process. One or a small series of letters or numbers may well serve to identify which alternative process is employed in a multi-process puzzle. For example, “GCF” or “LCM” serve as distinctive shapes easily recognized by the student, without any requirement that the student associate the symbols with particular letters or words. Other nonsense symbols could be employed. However, as integration of ST reasoning into a primarily LA world remains an ultimate goal of the system 10, utilizing alphanumeric identifying indicia can be a valuable aid in the eventual LA integration process.
Referring to
Process 210 begins at a start state 402 and proceeds to state 404 to present an arena having an animated character, an obstacle and a virtual apparatus, in certain embodiments. Proceeding to state 406, process 210 teaches the students a simple goal. In certain embodiments, a goal is to get JiJi, an animated penguin 100 (
In one embodiment, such as where the user is using the system 10 for the first time, state 404 may present an arena having only a virtual apparatus. Advancing to state 406, this virtual apparatus has only a single object to click or manipulate, which eliminates any distracting elements from the display and makes it easy for the user to understand what should be done.
At the completion of state 408, process 210 advances to a decision state to decide if a predetermined condition is met. In certain embodiments, the condition can be that the user manipulated the object correctly a preselected number of times. If the predetermined condition is not met, process 210 advances to state 412, where either the obstacle or the virtual apparatus is changed to present a new problem to the user. The user then manipulates an object in the virtual apparatus, and process 210 continues at state 408. If the predetermined condition is met, process 210 returns at a state 414, where it is deemed that the user understands the goals taught by process 210.
Therefore, after the penguin walks across the screen, a new problem is presented. If the user does not click on the object, the penguin does nothing. The reward is that the penguin walks across the screen and a new puzzle is presented. The students want to see something new, and do not want to be stuck seeing the same screen of static objects in an impoverished environment. Thus, they quickly become motivated to click on the appropriate spot to remove the obstacle and have the penguin “move on”.
After a sufficient number of puzzles of this type, process 210 increases the difficulty at state 412. Process 210 can add two objects that can be clicked on. Clicking on one of them (the “correct” one) will remove the obstacle, and clicking on the “wrong” one will be unsuccessful in removing the obstacle. The “wrong” clickable object differs from before, where there was no other clickable object at all. Clicking the “wrong” object will cause the virtual apparatus to start “working”, but it will fail to remove the obstacle (fail to make the path passable). In other words, process 210 shows why the object was the “wrong” thing to click on. Process 210 steadily increases the difficulty appropriately, such as progressing from one clickable item to two clickable items. Subsequently, process 210 could add more clickable items (where some are right and some are wrong). In one embodiment, process 210 could also require the user to click on a sequence of items, where only a correct sequence will “work”.
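The loop of process 210, presenting problems until the user has manipulated the correct object a preselected number of times, can be sketched as follows. This is a simplified model for illustration; the function and variable names are hypothetical.

```python
def run_training(problems, required_correct):
    """Sketch of process 210's loop: present problems until the user has
    clicked the correct object a preselected number of times.

    problems is a sequence of (clicked, correct) pairs, one per puzzle."""
    correct_count = 0
    for clicked, correct in problems:
        if clicked == correct:
            correct_count += 1   # obstacle removed; penguin crosses the screen
        # A wrong click animates the apparatus "working" but failing,
        # showing the user why the choice was wrong; a new problem follows.
        if correct_count >= required_correct:
            return True          # condition met; process 210 returns
    return False                 # condition not yet met

# Four puzzles, of which three are answered correctly.
history = [("A", "A"), ("B", "C"), ("C", "C"), ("B", "B")]
assert run_training(history, required_correct=3) is True
```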
A simple example of the first two levels of difficulty is as follows:
The examples above of executing process 210 show how the system teaches the logically consistent goals. In certain embodiments, the goal is to get JiJi past the obstacle and on to the next problem. The condition to obtain this goal is to make the path passable. The way the user obtains this goal is by a sequence of clicks on the appropriate parts of the screen. Process 210 teaches this by starting off with very simple games as described above. Once the student understands this goal and the way to obtain this goal, the system 10 can then start teaching the subject matter, e.g., mathematics. The system does so by creating problems with the same goal (removing the obstacle so JiJi can get across the screen). In order for the user to determine the correct sequence of clicks, the user must understand the underlying mathematics that the game is trying to teach. This leads to process 220, described below.
Referring to
Beginning at a start state 502, process 220 moves to state 504 to present objects in a virtual apparatus having rules corresponding to a subject matter area. The system 10 attempts to teach the subject area by modeling the subject area with a virtual apparatus. A virtual apparatus is a graphical entity presented on the computer screen that reacts to a sequence of user mouse clicks or other means of selecting or manipulating one or more spatial locations. In the simple example described earlier, the shape hovering in the sky is the virtual apparatus. In that case, the virtual apparatus consisted of one or more shapes. Clicking on the shape causes the shape to transition over to the hole in the ground and attempt to fill the hole. This action is what the apparatus is designed to do (the “rules”).
To teach a subject area, a virtual apparatus is designed that has rules which correspond to the subject area. For example, to model addition, the system can have a virtual apparatus that has different stacks of rectangles. Clicking on a particular stack causes the planks in the stack to transition over to the holes in the ground and attempt to fill them. At state 506, process 220 animates application of rules for a selected object in the virtual apparatus in a clear manner appropriate for the subject matter area. For example, to make the user do an addition problem, the arena can have two holes. One of the holes is three planks deep, and the other hole is two planks deep. If the user clicks on a stack that has only four planks, then the planks will transition over to the first hole and successfully fill the three-plank-deep hole. Then the remaining plank will move over to the second hole and attempt to fill it. Unfortunately, the second hole requires two planks, but there is only one left. As a result, the second hole is not filled perfectly, and JiJi is not able to cross. In this case, the user needs to click on a stack of five planks to successfully solve the puzzle. At the completion of state 506, process 220 advances to a return state 508.
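The plank-and-hole rule above reduces to checking that a clicked stack contains exactly the sum of the hole depths. The following is a minimal sketch under that reading; the function name `fill_holes` is an assumption for illustration, not part of the described system.

```python
# Illustrative sketch of the plank-and-hole addition apparatus:
# a clicked stack succeeds only when it fills every hole exactly.

def fill_holes(stack_size, hole_depths):
    """Simulate sending one stack of planks to the holes in order.

    Returns True when every hole is filled exactly, i.e. the stack
    size equals the sum of the hole depths, so the path is passable.
    """
    remaining = stack_size
    for depth in hole_depths:
        if remaining < depth:
            return False   # a hole is left partly unfilled
        remaining -= depth
    # Leftover planks also block the path: the fill must be exact.
    return remaining == 0

# The example from the text: holes of depth 3 and 2 need a 5-plank stack.
```

Here the “rule” of the apparatus (exact fill) is the arithmetic fact being taught: 3 + 2 = 5.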
The above example illustrates a simple way to model addition with a virtual apparatus. The system 10 can model all of mathematics in this way. Several game designs for mathematics are included in the appendices.
A feature of the system is that the workings of the virtual apparatus should be visually “clear” to the user as to what the virtual apparatus does. An example of an “unclear” virtual apparatus for the addition example above is as follows:
The more “clear” way for the virtual apparatus to work is as follows:
Referring to
Beginning at a start state 602, process 230 proceeds to state 604 and presents to the user an arena having at least an animated character and an obstacle. Advancing to state 606, process 230 presents to the user a virtual apparatus having a predetermined object identified in the arena, such as by highlighting or using a pointer to identify the object. If the subject matter area is sophisticated, the virtual apparatus typically has multiple clickable regions, and can require a particular sequence of clicks to make it do the appropriate thing to remove the obstacle. If the user starts clicking randomly in the arena, it could take a long time for the user to figure out how the virtual apparatus works. Therefore, the system presents an appropriately simple problem to the user. Then, in certain embodiments, process 230 masks out all of the clickable regions of the virtual apparatus except for a first region (such as in a sequence of regions) that the user must click on in order to solve the problem. Then process 230 points out this region to the user. One way that process 230 points out the region is by highlighting it using a bright color so the user sees it clearly. Another technique is to place a hand cursor over the object to prompt the user to click on the object. Proceeding to state 608, process 230 animates the result of manipulating the identified object to show how the virtual apparatus works. Moving to a decision state 610, process 230 determines if further objects in the virtual apparatus are to be identified to the user, such as, for example, if a sequence of manipulations of objects is required to make the virtual apparatus work. If further objects are to be identified, as determined at decision state 610, process 230 advances to state 612 to change the virtual apparatus to identify another predetermined object, and then shows how the virtual apparatus works at state 608, previously described.
For example, process 230 can mask out all of the clickable regions except for a second region (in a sequence) that the user must click on. In essence, the system guides the user through a problem, showing the user what to click on and making it so they can only get it right by identifying the appropriate region as the only clickable region at the time. In certain embodiments, after the system guides the user through the sequence, the virtual apparatus does its function (through animation) and the user observes how it works.
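The masking step can be sketched as follows. This is a hedged illustration under the assumption that each guided step leaves exactly one region clickable; the name `guided_clicks` is invented for exposition.

```python
# Minimal sketch of the masking in process 230: at each step of the
# solution sequence, every clickable region except the next correct
# one is disabled, so the guided user can only click correctly.

def guided_clicks(regions, solution_sequence):
    """Return, for each step, a map of region -> clickable flag."""
    masks = []
    for next_region in solution_sequence:
        # Mask out everything except the one region to click next.
        masks.append({r: (r == next_region) for r in regions})
    return masks

masks = guided_clicks(["a", "b", "c"], ["b", "a"])
# Step 1 leaves only "b" clickable; step 2 leaves only "a".
```

After the guided walkthrough, the masks would be lifted gradually, matching the unmasking of additional clickable areas described below.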
After this animation, process 230 can determine, at decision state 610, that no further objects are to be identified. If so, process 230 can advance to optional state 614 to identify two or more predetermined objects in the virtual apparatus at the same time. For example, the system can present another problem, and unmask more clickable areas so the user has to decide on their own what to click on. The system can gradually increase the difficulty so that the user has a chance to get a feel for how to operate the virtual apparatus. Proceeding to optional state 616, process 230 animates the result of manipulating at least one of the identified objects in the virtual apparatus to show how the apparatus works. Eventually, the user knows how the apparatus works, and then must use it to solve the problems related to the subject area. Since the rules of the virtual apparatus are related to the rules of the subject area, just learning how to operate the virtual apparatus is progress towards learning the subject area.
Referring to
Beginning at a start state 702, process 240 moves to state 704 to request the user to manipulate the virtual apparatus. Proceeding to a decision state 706, process 240 determines if the user chose the correct object(s) or sequence of objects. If so, process 240 advances to state 708 to provide one or more success indicators to the user, such as via animation. Continuing at state 710, process 240 shows the user reasons why the chosen objects caused success in the problem. The appendices provide examples of states 708 and 710 for various problems. Returning to decision state 706, if the user did not choose the correct object(s) or sequence of objects, process 240 proceeds to state 712 to provide one or more lack of success indicators, such as via animation. Continuing at state 714, process 240 shows the user reasons why the chosen objects caused the lack of success in the problem.
At the completion of either state 710 or 714, process 240 moves to a decision state 720 to check if the problem was solved correctly. If not, process 240 continues at state 704 to request the user to manipulate the virtual apparatus as described above. In certain embodiments, the user is asked to repeat the same problem. In other embodiments, the user is asked to solve a new problem similar to the problem that was incorrectly solved. However, if the problem is correctly solved, as determined at decision state 720, process 240 advances to a decision state 722 to determine if a further problem in a sequence of problems having progressive difficulty is available. If so, process 240 continues at state 724 by selecting a new problem for the user. Proceeding to an optional state 726, process 240 can increase the difficulty in the virtual apparatus to make the problem more challenging for the user, such as, for example, by having more objects, by requiring a certain sequence of objects to be selected, and so forth. Process 240 then continues at state 704 to request the user to manipulate the virtual apparatus, as described above. Returning to decision state 722, if a further problem in a sequence of problems having progressive difficulty is not available, such as due to all the problems being executed, process 240 moves to a return state 730.
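The control flow of process 240 (feedback after each attempt, retry on failure, advance on success) can be sketched as a simple loop. The data shapes here (`problems` as dicts with an `id` and a `solution`) are assumptions made only to illustrate the flow.

```python
# Hedged sketch of the process-240 loop: each wrong attempt yields a
# lack-of-success indicator and a retry (states 712/714 then 704);
# a correct attempt yields a success indicator (states 708/710) and
# advances to the next, possibly harder, problem (states 722-726).

def run_problem_sequence(problems, attempts_per_problem):
    """Walk a progressive problem sequence and return a feedback log.

    attempts_per_problem gives, for each problem, the user's
    successive answers until one is correct.
    """
    log = []
    for problem, attempts in zip(problems, attempts_per_problem):
        for attempt in attempts:
            if attempt == problem["solution"]:
                log.append(("success", problem["id"]))  # show why it worked
                break
            log.append(("retry", problem["id"]))        # show why it failed
    return log

problems = [{"id": 1, "solution": 5}, {"id": 2, "solution": 7}]
log = run_problem_sequence(problems, [[4, 5], [7]])
```

The log mirrors the states of the flowchart: one failed and one successful attempt on the first problem, then a first-try success on the second.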
While specific blocks, sections, devices, functions and modules may have been set forth above, a skilled technologist will realize that there are many ways to partition the system, and that there are many parts, components, modules or functions that may be substituted for those listed above.
While the above detailed description has shown, described, and pointed out the fundamental novel features of the invention as applied to various embodiments, it will be understood that various omissions and substitutions and changes in the form and details of the system illustrated may be made by those skilled in the art, without departing from the intent of the invention.
This is a continuation of copending application Ser. No. 11/218,282 filed Sep. 1, 2005, which is hereby incorporated by reference in its entirety.
Number | Name | Date | Kind |
---|---|---|---|
3596377 | Abbey | Aug 1971 | A |
3698277 | Barra | Oct 1972 | A |
4402249 | Zankman | Sep 1983 | A |
4416182 | Wise et al. | Nov 1983 | A |
4820165 | Kanapa | Apr 1989 | A |
4867685 | Brush et al. | Sep 1989 | A |
5059127 | Lewis et al. | Oct 1991 | A |
5169342 | Steele et al. | Dec 1992 | A |
5261823 | Kurokawa | Nov 1993 | A |
5302132 | Corder | Apr 1994 | A |
5447166 | Gevins | Sep 1995 | A |
5447438 | Watanabe et al. | Sep 1995 | A |
5478240 | Cogliano | Dec 1995 | A |
5486112 | Troudet et al. | Jan 1996 | A |
5533903 | Kennedy | Jul 1996 | A |
5574238 | Mencher | Nov 1996 | A |
5584699 | Silver | Dec 1996 | A |
5590057 | Fletcher et al. | Dec 1996 | A |
5618182 | Thomas | Apr 1997 | A |
5690496 | Kennedy | Nov 1997 | A |
5727951 | Ho et al. | Mar 1998 | A |
5746605 | Kennedy | May 1998 | A |
5779486 | Ho et al. | Jul 1998 | A |
5783764 | Amar | Jul 1998 | A |
5797130 | Nelson et al. | Aug 1998 | A |
5806056 | Hekmatpour | Sep 1998 | A |
5810605 | Siefert | Sep 1998 | A |
5820386 | Sheppard | Oct 1998 | A |
5822745 | Hekmatpour | Oct 1998 | A |
5827066 | Henter | Oct 1998 | A |
5841655 | Stocking et al. | Nov 1998 | A |
5842868 | Phillips | Dec 1998 | A |
5870731 | Trif et al. | Feb 1999 | A |
5870768 | Hekmatpour | Feb 1999 | A |
5886273 | Haruyama et al. | Mar 1999 | A |
5904485 | Siefert | May 1999 | A |
5907831 | Lotvin et al. | May 1999 | A |
5934909 | Ho et al. | Aug 1999 | A |
5956040 | Asano et al. | Sep 1999 | A |
5957699 | Peterson et al. | Sep 1999 | A |
5987302 | Driscoll et al. | Nov 1999 | A |
5987443 | Nichols et al. | Nov 1999 | A |
6000945 | Sanchez-Lazer et al. | Dec 1999 | A |
6020886 | Jacober et al. | Feb 2000 | A |
6030226 | Hersh | Feb 2000 | A |
6045515 | Lawton | Apr 2000 | A |
6047261 | Siefert | Apr 2000 | A |
6072113 | Tohgi et al. | Jun 2000 | A |
6077085 | Parry et al. | Jun 2000 | A |
6112049 | Sonnenfeld | Aug 2000 | A |
6118973 | Ho et al. | Sep 2000 | A |
6144838 | Sheehan | Nov 2000 | A |
6155971 | Calhoun et al. | Dec 2000 | A |
6164971 | Figart | Dec 2000 | A |
6166314 | Weinstock et al. | Dec 2000 | A |
6186794 | Brown et al. | Feb 2001 | B1 |
6206700 | Brown et al. | Mar 2001 | B1 |
6213956 | Lawton | Apr 2001 | B1 |
6270352 | Ditto | Aug 2001 | B1 |
6281422 | Kawamura | Aug 2001 | B1 |
6288315 | Bennett | Sep 2001 | B1 |
6293801 | Jenkins et al. | Sep 2001 | B1 |
6334779 | Siefert | Jan 2002 | B1 |
6336813 | Siefert | Jan 2002 | B1 |
6352475 | Mraovic | Mar 2002 | B1 |
6364666 | Jenkins et al. | Apr 2002 | B1 |
6386883 | Siefert | May 2002 | B2 |
6388181 | Moe | May 2002 | B2 |
6390918 | Yagi et al. | May 2002 | B1 |
6418298 | Sonnenfeld | Jul 2002 | B1 |
6419496 | Vaughn | Jul 2002 | B1 |
6435508 | Tavel | Aug 2002 | B1 |
6480698 | Ho et al. | Nov 2002 | B2 |
6484010 | Sheehan | Nov 2002 | B1 |
6486388 | Akahori | Nov 2002 | B2 |
6513042 | Anderson et al. | Jan 2003 | B1 |
6514084 | Thomas | Feb 2003 | B1 |
6526258 | Bejar et al. | Feb 2003 | B2 |
6565359 | Calhoun et al. | May 2003 | B2 |
6582235 | Tsai et al. | Jun 2003 | B1 |
6629892 | Oe et al. | Oct 2003 | B2 |
6644973 | Oster | Nov 2003 | B2 |
6648648 | O'Connell | Nov 2003 | B1 |
6676412 | Masterson et al. | Jan 2004 | B1 |
6676413 | Best et al. | Jan 2004 | B1 |
6688889 | Wallace et al. | Feb 2004 | B2 |
6699123 | Matsuura et al. | Mar 2004 | B2 |
6716033 | Lassowsky | Apr 2004 | B1 |
6751439 | Tice et al. | Jun 2004 | B2 |
6755657 | Wasowicz | Jun 2004 | B1 |
6755661 | Sugimoto | Jun 2004 | B2 |
6827578 | Krebs et al. | Dec 2004 | B2 |
6905340 | Stansvik | Jun 2005 | B2 |
6915286 | Policastro et al. | Jul 2005 | B2 |
6978115 | Whitehurst et al. | Dec 2005 | B2 |
6978244 | Rovinelli et al. | Dec 2005 | B2 |
7024398 | Kilgard et al. | Apr 2006 | B2 |
7122004 | Cassily | Oct 2006 | B1 |
7182600 | Shaw et al. | Feb 2007 | B2 |
7184701 | Heslip | Feb 2007 | B2 |
7199298 | Funaki | Apr 2007 | B2 |
7220907 | McIntosh | May 2007 | B2 |
7294107 | Simon et al. | Nov 2007 | B2 |
7451065 | Pednault et al. | Nov 2008 | B2 |
7653931 | Peterson et al. | Jan 2010 | B1 |
7775866 | Mizuguchi et al. | Aug 2010 | B2 |
D627681 | Lelardoux et al. | Nov 2010 | S |
8083523 | De Ley et al. | Dec 2011 | B2 |
8137106 | De Ley et al. | Mar 2012 | B2 |
8210851 | Wade et al. | Jul 2012 | B2 |
8491311 | Bodner et al. | Jul 2013 | B2 |
8577280 | Hutchinson et al. | Nov 2013 | B2 |
D704736 | Mariet et al. | May 2014 | S |
D765139 | Hu | Aug 2016 | S |
D821426 | Kim et al. | Jun 2018 | S |
D822047 | Wills et al. | Jul 2018 | S |
D824949 | Wu et al. | Aug 2018 | S |
20010018178 | Siefert | Aug 2001 | A1 |
20010023059 | Toki | Sep 2001 | A1 |
20010036620 | Peer et al. | Nov 2001 | A1 |
20010041330 | Brown et al. | Nov 2001 | A1 |
20010046659 | Oster | Nov 2001 | A1 |
20010055749 | Siefert | Dec 2001 | A1 |
20020002411 | Higurashi et al. | Jan 2002 | A1 |
20020005109 | Miller | Jan 2002 | A1 |
20020032733 | Howard | Mar 2002 | A1 |
20020042790 | Nagahara | Apr 2002 | A1 |
20020076684 | Blevins et al. | Jun 2002 | A1 |
20020102522 | Sugimoto | Aug 2002 | A1 |
20020142278 | Whitehurst et al. | Oct 2002 | A1 |
20020150868 | Yui et al. | Oct 2002 | A1 |
20020160347 | Wallace et al. | Oct 2002 | A1 |
20020168100 | Woodall | Nov 2002 | A1 |
20020169822 | Packard et al. | Nov 2002 | A1 |
20020177113 | Sherlock | Nov 2002 | A1 |
20020188583 | Rukavina et al. | Dec 2002 | A1 |
20030009352 | Bolotinikov et al. | Jan 2003 | A1 |
20030017442 | Tudor et al. | Jan 2003 | A1 |
20030017443 | Kilgore | Jan 2003 | A1 |
20030027122 | Stansvik | Feb 2003 | A1 |
20030039948 | Donahue | Feb 2003 | A1 |
20030059759 | Calhoun et al. | Mar 2003 | A1 |
20030077559 | Braunberger et al. | Apr 2003 | A1 |
20030113697 | Plescia | Jun 2003 | A1 |
20030129574 | Ferriol et al. | Jul 2003 | A1 |
20030129576 | Wood et al. | Jul 2003 | A1 |
20030148253 | Sacco et al. | Aug 2003 | A1 |
20030151628 | Salter | Aug 2003 | A1 |
20030151629 | Krebs et al. | Aug 2003 | A1 |
20030157469 | Embretson | Aug 2003 | A1 |
20030165800 | Shaw et al. | Sep 2003 | A1 |
20030167902 | Hiner et al. | Sep 2003 | A1 |
20030167903 | Funaki | Sep 2003 | A1 |
20030176931 | Pednault et al. | Sep 2003 | A1 |
20030232318 | Altenhofen et al. | Dec 2003 | A1 |
20040005536 | Lai et al. | Jan 2004 | A1 |
20040007118 | Holcombe | Jan 2004 | A1 |
20040014017 | Lo | Jan 2004 | A1 |
20040014021 | Suleiman | Jan 2004 | A1 |
20040033475 | Mizuma et al. | Feb 2004 | A1 |
20040039603 | Hanrahan | Feb 2004 | A1 |
20040111310 | Sziam et al. | Jun 2004 | A1 |
20040137984 | Salter | Jul 2004 | A1 |
20040166484 | Budke et al. | Aug 2004 | A1 |
20040180317 | Bodner et al. | Sep 2004 | A1 |
20040237756 | Forbes | Dec 2004 | A1 |
20040244564 | McGregor | Dec 2004 | A1 |
20040260584 | Terasawa | Dec 2004 | A1 |
20050064375 | Blank | Mar 2005 | A1 |
20070046678 | Peterson et al. | Mar 2007 | A1 |
20070134630 | Shaw et al. | Jun 2007 | A1 |
20070257906 | Shimura et al. | Nov 2007 | A1 |
20070265083 | Ikebata | Nov 2007 | A1 |
20070281285 | Jayaweera | Dec 2007 | A1 |
20090081626 | Milgram et al. | Mar 2009 | A1 |
20090325137 | Peterson et al. | Dec 2009 | A1 |
20100209896 | Weary et al. | Aug 2010 | A1 |
20130244217 | Potts et al. | Sep 2013 | A1 |
20140186816 | Peterson et al. | Jul 2014 | A1 |
Number | Date | Country |
---|---|---|
WO2007028142 | Mar 2007 | WO |
Entry |
---|
Kennedy, Brian. Tetris Plus—Jaleco. Review and description of Tetris Plus [online], [retrieved on May 29, 2013]. Retrieved from the Internet <URL: http://dextremes.com/sega/revs/tetrisplus.html>. |
Thompson, Jon. Tetris Plus Review. [online], [retrieved on May 29, 2013]. Retrieved from the Internet <URL: http://www.allgame.com/game.php?id=1968&tab=review>. |
Hu, W. et al., “Dynamics of Innate Spatial-Temporal Learning Process”. 6th Annual International Conference on Complex Systems (May 2004). |
Bodner and Shaw, “Symmetry Math Video Game Used to Train Profound Spatial-Temporal Reasoning Abilities Equivalent to Dynamical Knot Theory” American Mathematical Society (2004); vol. 34; pp. 189-202. |
Bodner and Shaw, “Symmetry Operations in the Brain: Music and Reasoning” (2001); pp. 1-30. |
Bodner M, Shaw GL, “Music Math Connection” Journal of music and movement based learning. (2002) vol. 8, No. 3; pp. 9-16. |
Bodner, Peterson, Rodgers, Shaw et al., “Spatial-Temporal (ST) Math Video Game Results Show Rapid Learning Curves Supportive of Innate ST Brain Function” Oct. 9, 2006; ScholarOne, Inc., 2000 (2001); 1 page. |
Hu W, Bodner M, Jones EG, Peterson M, Shaw GL., “Dynamics of Innate Spatial-Temporal Learning Process: Data Driven Education Results Identify Universal Barriers to Learning” 6th Annual International Conference on Complex Systems (2004); 8 pages. |
Hu W, Bodner M, Jones EG, Peterson MR, Shaw GL, “Data Mining of Mathematical Reasoning Data Relevant to Large Innate Spatial-Temporal Reasoning Abilities in Children: Implications for Data Driven Education” Soc. Neurosci. Abst. 34th annual meeting (2004); 1 page. |
M.I.N.D.® Institute, Research Division, “Cramming v. Understanding”, Position Paper #4, Feb. 2003, 1 page. |
M.I.N.D.® Institute, Research Division, “Education=Music Math Causal Connection”, Position Paper #1, Jul. 2002, 2 pages. |
M.I.N.D.® Institute, Research Division, “The race to raise a brainer baby”, Position Paper #2, Aug. 2002, 1 page. |
M.I.N.D.® Institute, Research Division, “Trion Music Game: Breakthrough in the Landmark Math + Music Program”, Position Paper #3, Jan. 2003, 1 page. |
Peterson, Bodner, Rodgers, Shaw et al., Music—Math Program Based on Cortical Model Enhances 2nd Graders Performance on Advanced Math Concepts and Stanford 9 Math; Oct. 9, 2006; ScholarOne, Inc. (2000); 1 page. |
Peterson, Bodner, Shaw et al., “Innate Spatial-Temporal Reasoning and the Identification of Genius” Neurological Research, vol. 26, Jan. 2004; W.S. Maney & Son Ltd.; pp. 2-8. |
Peterson, Shaw et al., “Enhanced Learning of Proportional Math Through Music Training and Spatial-Temporal Training” Neurological Research(1999) vol. 21; Forefront Publishing Group; pp. 139-152. |
Shaw et al., “Music Training Causes Long-Term Enhancement of Preschool Children's Spatial-Temporal Reasoning” Neurological Research (1997); vol. 19, No. 1; pp. 1-8; Forefront Publishing Group, Wilton, CT, USA. |
Shaw GL, Bodner M, Patera J “Innate Brain Language and Grammar: Implications for Human Language and Music” In Stochastic Point Processes (eds Srinivasan SK and Vihayakumar A). Narosa Publishing, New Delhi (2003); pp. 287-303. |
Shaw, G.L., “Keeping Mozart in Mind,” M.I.N.D. Institute/University of California, Academic Press, 2000, Cover Page, Table of Contents, Chapters 2, 12, 13, 14, 18, 19, 20, 23. |
Special Report—Summary of the 2002 M.I.N.D.® Institute newsletter which details data from 2nd graders in our Music Spatial-Temporal Math Program (2002) vol. 1, Issue 2; pp. 1-12. |
Today@UCI: Press Release: Piano and Computer Training Boost Student Math Achievement, UC Irvine Study Shows [online]; Mar. 15, 1999 [retrieved on Mar. 16, 2008]; Retrieved from the Internet: URL:http://today.uci.edu/news/release_detail.asp?key=646. |
Watson S. Wind M, Yee M, Bodner M, Shaw GL., “Effective Music Training for Children with Autism” Early Childhood Connections, (2003); vol. 9; pp. 27-32. |
International Search Report, Application No. PCT/US06/34462, dated Aug. 30, 2007, 2 pgs. |
“The Stochastic Learning Curve: Optimal Production in the Presence of Learning-Curve Uncertainty”, Joseph B. Mazzola and Kevin F. McCardle; Source: Operations Research, vol. 45, No. 3 (May-Jun. 1997), pp. 440-450. |
“Toward a Theory of Continuous Improvement and the Learning Curve”, Willard I. Zangwill and Paul B. Kantor; Management Science, vol. 44, No. 7 (Jul. 1998), pp. 910-920. |
“Rigorous Learning Curve Bounds from Statistical Mechanics” by David Haussler, Michael Kearns, H. Sebastian Seung, Naftali Tishby, (1996), Machine Learning 25, pp. 195-236. |
“Seer: Maximum Likelihood Regression for Learning-Speed Curves”, Carl Myers Kadie, Graduate college of the University of Illinois at Urbana-Champaign, 1995, pp. 1-104. |
Yelle, Louis E., “The Learning Curve: Historical Review and Comprehensive Survey”, Decision Sciences, vol. 10, Issue 2, pp. 302-328 (Apr. 1979). |
Restriction-Election Requirement for U.S. Appl. No. 13/729,493, dated Mar. 30, 2015. |
Response to Election/ Restriction for U.S. Appl. No. 13/729,493, dated May 7, 2015. |
Non-Final Rejection (1) for U.S. Appl. No. 13/729,493, dated Jun. 26, 2015. |
Response after Non-Final Action (1) for U.S. Appl. No. 13/729,493, dated Sep. 25, 2015. |
Non-Final Rejection (2) for U.S. Appl. No. 13/729,493, dated Dec. 7, 2015. |
Response after Non-Final Action (2) for U.S. Appl. No. 13/729,493, dated Mar. 7, 2016. |
Final Rejection for U.S. Appl. No. 13/729,493, dated May 25, 2016. |
Response after Final Action for U.S. Appl. No. 13/729,493, dated Oct. 25, 2016. |
Notice of Allowance for U.S. Appl. No. 13/729,493, dated Dec. 16, 2016. |
Response to Reasons for Allowance for U.S. Appl. No. 13/729,493, dated Mar. 14, 2017. |
Number | Date | Country | |
---|---|---|---|
20090325137 A1 | Dec 2009 | US |
Number | Date | Country | |
---|---|---|---|
Parent | 11218282 | Sep 2005 | US |
Child | 12494154 | US |