Educational video games may present users with learning material and associated challenges that facilitate the learning of the material. Some educational video games may also gauge a user's retention of the learning material, such as by monitoring correct and incorrect answers in a testing session. For some users, such as children, interactive video games may provide an engaging experience that is conducive to learning.
Embodiments are disclosed that relate to assessing a user's ability to recognize a target item by reacting to the target item and performing a target gesture. For example, one disclosed embodiment provides a method of assessing a user's ability to recognize a target item from a collection of learning items that includes the target item. The method comprises providing to a display device the learning items in a sequence, and while providing the learning items to the display device, receiving input from a sensor to recognize a user gesture made by the user. The method includes determining whether the user gesture is received within a target timeframe corresponding to the target item. If the user gesture is received within the target timeframe, then the method includes determining whether the user gesture matches a target gesture. If the user gesture matches the target gesture, then the method includes providing to the display device a reward image for the user.
This Summary is provided to introduce a selection of concepts in a simplified form that are further described below in the Detailed Description. This Summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter. Furthermore, the claimed subject matter is not limited to implementations that solve any or all disadvantages noted in any part of this disclosure.
Embodiments are disclosed that relate to assessing a user's ability to recognize a target item by reacting to the target item and performing a target gesture. With reference to
The computing system 14 includes computing device 26, such as a video game console, and a display device 22 that receives media content from the computing device. Other examples of suitable computing devices 26 include, but are not limited to, set-top boxes (e.g. cable television boxes, satellite television boxes), digital video recorders (DVRs), desktop computers, laptop computers, tablet computers, home entertainment computers, network computing devices, and any other device that may provide content to a display device 22 for display.
The computing system 14 may also include a sensor 30 that is coupled to the computing device 26. In some embodiments, the sensor 30 may be separate from the computing device as shown in
Data from the sensor 30 may be used to recognize a user gesture 34 made by the user 18. In the example shown in
With reference now to
It will also be appreciated that media content and other data may be received by the computing device 26 from one or more remote content sources, illustrated in
With reference now to
It will be appreciated that in some embodiments, method 300 may be performed as one or more segments within a learning episode or program designed to teach educational material to the user 18. For example, an interactive educational video may be designed to teach children the letters of an alphabet and/or numbers. At the beginning of the educational video, a passive video segment may introduce one or more letters and/or numbers to the child. Thereafter, at one or more segments of the educational video, the method 300 may be performed to assess the child's ability to recognize the letters and/or numbers previously presented. In the following description, these segments during which the method 300 may be performed will be referred to as assessment segments.
As a more specific example, a passive video segment may introduce one Letter of the Day and one Number of the Day to the user 18. The method 300 may then be performed at one point during the video to assess the user's ability to recognize the Letter of the Day, and at another point during the video to assess the user's ability to recognize the Number of the Day. In another example, one or more of the letters and/or numbers may be presented as stylized characters, such as the target item 38 illustrated in
It will also be appreciated that in some embodiments, method 300 may include a process for determining whether a user 18 is present and ready to participate in an assessment segment. For example, data from the sensor 30 may be used to determine how many users are present in the media presentation environment 10. If more than one user 18 is present, then a separate multi-user game may be provided to the display device 22 for display to the users. If no users are present, then a user absent experience video may be provided to the display device 22 for display to the media presentation environment 10. If no users are found after a predetermined time, such as 5 minutes, 10 minutes, or any other suitable time, then a second user absent experience video may be provided to the display device 22.
If a user 18 is found before the second user absent experience video is completed, then it may be determined whether the user is ready to participate in an assessment segment. If the user 18 is not ready to participate, then a user passive experience video may be provided to the display device. If the user 18 is still not ready to participate after a predetermined time, such as 10 minutes, then a second user passive experience video may be provided to the display device 22.
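The presence-detection and readiness flow described above can be sketched as a simple selection function. This is a hypothetical Python sketch; the function name, parameter names, and enum values are illustrative and not part of the disclosure, and the timeout values merely mirror the example times given in the text (e.g. 10 minutes = 600 seconds):

```python
from enum import Enum, auto

class Experience(Enum):
    MULTI_USER_GAME = auto()
    USER_ABSENT_VIDEO_1 = auto()
    USER_ABSENT_VIDEO_2 = auto()
    PASSIVE_VIDEO_1 = auto()
    PASSIVE_VIDEO_2 = auto()
    ASSESSMENT_INTRO = auto()

def select_experience(user_count, seconds_elapsed, user_ready,
                      absent_timeout=600, passive_timeout=600):
    """Choose which content to provide to the display device based on
    sensed presence (user_count) and readiness (user_ready)."""
    if user_count > 1:
        # More than one user present: offer a multi-user game.
        return Experience.MULTI_USER_GAME
    if user_count == 0:
        # No users: first absent-experience video, then a second one
        # after the predetermined time elapses.
        if seconds_elapsed >= absent_timeout:
            return Experience.USER_ABSENT_VIDEO_2
        return Experience.USER_ABSENT_VIDEO_1
    # Exactly one user present.
    if user_ready:
        return Experience.ASSESSMENT_INTRO
    if seconds_elapsed >= passive_timeout:
        return Experience.PASSIVE_VIDEO_2
    return Experience.PASSIVE_VIDEO_1
```

In practice `user_count` and `user_ready` would be derived from data received from the sensor 30, for example via skeletal tracking.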
If it is determined that the user 18 is ready to participate, then in some embodiments the method may provide an assessment segment introduction video to the display device 22. In one example, the assessment segment introduction video may introduce and explain to the user 18 an assessment challenge game that assesses the user's ability to recognize a Letter of the Day or a Number of the Day from a sequence of letters or numbers provided to the display device 22. The user 18 may be instructed to perform a particular gesture or movement, hereinafter referred to as a target gesture, when the user sees the Letter or Number of the Day. In this manner, the method 300 may also assess an ability of the user 18 to perform two skills at one time—in this case, recognizing the Letter or Number of the Day and performing a target gesture in response to recognizing the Letter or Number of the Day.
In one example, the target gesture may comprise the user jumping in place. In another example, the target gesture may comprise a throwing motion that may simulate throwing an imaginary ball toward the target item 38 displayed on the display device 22 (with such target gesture illustrated in
In a more specific example, the user 18 may be asked to practice the target gesture and data from the sensor 30 may be used to determine whether the user performs the target gesture. If the user 18 does not perform the target gesture, an additional tutorial video explaining and/or demonstrating the target gesture may be provided to the display device 22. If the user 18 performs the target gesture, then an assessment segment may commence.
Turning now to
In some embodiments, multiple instances of the target item 38 may be provided to the display device 22 within the sequence of learning items, as indicated at 304. For example, where the target item 38 is the letter “G”, a sequence of 5 letters that contains 2 instances of the target item 38, such as “D, G, B, G, P”, may be provided to the display device 22. It will be appreciated that many different sequence lengths may be used that contain more or fewer than 5 characters, such as 3 characters, 7 characters, 9 characters, and other lengths. It will also be appreciated that many different numbers of instances of the target item 38 may be used within a sequence. For example, a sequence of 5 letters may contain 3 instances of the target item 38, a sequence of 11 letters may contain 5 instances of the target item, etc. It will also be appreciated that various combinations of sequence lengths and instances of the target item may be used.
Further, any suitable manner of presenting the sequence of learning items to the user may be used. For example, each learning item may be displayed individually, one at a time, on the display device 22, or two or more learning items may be displayed simultaneously or with some overlap in the display of each learning item. In another example, the learning items may appear on the display device 22 by entering from one side or edge of the display device, and may remain on the display device for a predetermined period of time, such as 1 second, 3 seconds, 5 seconds, or other suitable time. The learning item may also exit the display device 22 by moving to the left, right, top or bottom of the display device until the learning item is no longer visible.
In some embodiments, the target item 38 may be presented as at least a first learning item and a last learning item provided to the display device 22 in the sequence, as indicated at 306. For example, where the target item 38 is the letter “G”, a sequence of 5 letters that contains the target item 38 as the first item and the last item in the sequence, such as “G, D, P, D, G”, may be provided to the display device 22. It will be appreciated that a sequence may also include other instances of the target item 38 in addition to the target item being the first item and the last item in the sequence.
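The sequence-construction embodiments above (a configurable number of target instances, optionally anchored at the first and last positions) can be sketched as follows. This is an illustrative Python sketch; the function and parameter names are hypothetical and the distractor items stand in for visually similar non-target letters such as “D”, “B”, and “P”:

```python
import random

def build_sequence(target, distractors, length=5, target_count=2,
                   anchor_ends=False, rng=None):
    """Build a learning-item sequence containing instances of `target`
    among items drawn from `distractors`.

    With anchor_ends=True the target is forced into the first and last
    positions, as in the "G, D, P, D, G" example; otherwise
    `target_count` instances are shuffled into random positions, as in
    the "D, G, B, G, P" example.
    """
    rng = rng or random.Random()
    if anchor_ends:
        if length < 2:
            raise ValueError("anchored sequences need at least 2 slots")
        middle = [rng.choice(distractors) for _ in range(length - 2)]
        return [target] + middle + [target]
    if target_count > length:
        raise ValueError("more target instances than sequence slots")
    items = [target] * target_count
    items += [rng.choice(distractors) for _ in range(length - target_count)]
    rng.shuffle(items)
    return items
```

A sequence for the Letter of the Day “G” might then be produced with `build_sequence("G", ["D", "B", "P"])`, with other lengths and instance counts obtained by varying the parameters.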
In some embodiments, the sequence of learning items may be provided to the display device 22 as video content comprising multiple layers of video that are synchronously streamed to the display device, as indicated at 308. In other embodiments, the sequence of learning items may be provided to the display device 22 by branching between at least a first buffered video content and a second buffered video content, as indicated at 310. It will be appreciated that the sequence of learning items may be provided to the display device 22 by branching to additional buffered video content.
Continuing with
At 318, the method includes using input received from the sensor 30 to detect whether the user gesture 34 is received within a threshold number of instances of the target item. In one embodiment, if the user gesture 34 is not received within a first threshold number of instances of the target item, then a first reaction reminder may be provided to the user 18, as indicated at 320. In one example, the first reaction reminder may comprise audio feedback, such as a voice over prompt provided via display device 22, that encourages the user to react when the user sees the target item. As a specific example, the voice over prompt may tell the user 18, “Don't forget to throw your ball when you see the letter G.” The first threshold number of instances may be 1, 2, 3 or any other suitable number of instances.
After providing the first reaction reminder, and with reference to 324 in
Returning to
In another embodiment, when the user fails to react to a third threshold number of instances of the target item, a separate video game may be provided to the user 18 via display device 22. In one example, the separate video game may include interactive components that encourage the user 18 to become physically active. In this embodiment, the method 300 may exit the sequence of learning items before all instances of the target item have been provided to the display device 22. The third threshold number of instances may be 3, 4, 5 or any other suitable number of instances. It will be appreciated that in other embodiments, the method 300 may also comprise determining whether the user 18 has failed to react to one or more additional threshold numbers of instances.
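The escalating reaction-reminder behavior described above can be sketched as a function of the number of missed target-item instances. This is a hypothetical Python sketch; the return values and default thresholds are illustrative, the text giving 1 through 3 only as example values and noting that additional thresholds may be used:

```python
def reaction_feedback(missed_instances, first_threshold=1, exit_threshold=3):
    """Select escalating feedback when the user fails to react to
    instances of the target item."""
    if missed_instances >= exit_threshold:
        # Exit the sequence early and offer a separate video game that
        # encourages the user to become physically active.
        return "separate_video_game"
    if missed_instances >= first_threshold:
        # e.g. voice-over prompt: "Don't forget to throw your ball
        # when you see the letter G."
        return "reaction_reminder"
    return None
```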
Returning to
In some embodiments, if the user gesture 34 is not received within the target timeframe corresponding to the target item 38, then a first hint may be selected from a hint structure and provided to the display device 22. For example, if the user 18 reacts by performing the user gesture 34 while a learning item that is not the target item 38 is displayed on display device 22, then the user gesture will not be received within the target timeframe. In one example, the hint structure may comprise a file or data structure in the data-holding subsystem 42 that contains multiple hints. The first hint may comprise one or more of audio and visual feedback. In one example, the first hint may comprise audio feedback, such as a voice over prompt provided via display device 22, that informs the user that the user has reacted to a learning item that is not the target item. For example, the voice over prompt may tell the user 18, “Hmmm . . . that's not the letter “G”. Please try again.”
In other embodiments, where the user gesture 34 is not received within the target timeframe corresponding to the target item 38, an incorrect answer instance may be stored in the data-holding subsystem 42, as indicated at 330. As discussed above, the incorrect answer instance may be used in the performance measure provided to the display device 22.
If the incorrect answer instance is a second incorrect answer instance, then the method 300 may provide a second hint that provides different support than the first hint previously provided to the display device 22, as indicated at 332. In one example indicated at 334, the first hint may comprise only audio feedback as described above, and the second hint may comprise audio and visual feedback, such as a character appearing on the display device 22 who reiterates the instructions for the assessment segment to the user 18. In a more specific example, the character may tell the user 18, “I'd like you to throw your ball when you see the letter G.” The character may also demonstrate the target gesture as the letter G appears on the display device 22.
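The hint structure described above, in which each successive incorrect answer selects a hint offering different support, can be sketched as a list indexed by the incorrect-answer count. This is an illustrative Python sketch; the data layout and field names are hypothetical, and only the audio strings come from the examples in the text:

```python
# Hypothetical hint structure held in the data-holding subsystem: each
# entry offers progressively more support, moving from audio-only
# feedback to combined audio and visual feedback.
HINT_STRUCTURE = [
    {"audio": "Hmmm... that's not the letter G. Please try again.",
     "visual": None},
    {"audio": "I'd like you to throw your ball when you see the letter G.",
     "visual": "character_demonstration"},
]

def select_hint(incorrect_answers):
    """Return the hint for the current count of incorrect answer
    instances, clamping to the last (most supportive) hint."""
    if incorrect_answers <= 0:
        return None
    index = min(incorrect_answers, len(HINT_STRUCTURE)) - 1
    return HINT_STRUCTURE[index]
```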
After a hint has been provided, the method 300 may determine whether there are any remaining target item instances to be provided to the display device 22, as indicated at 324 in
Returning to
If the user gesture 34 does not match the target gesture, then a target gesture reminder may be provided to the display device 22, as indicated at 338. In one example, the target gesture reminder may comprise audio feedback, such as a voice over prompt provided via display device 22, that reminds the user to perform the target gesture when the user sees the target item. For example, the voice over prompt may tell the user 18, “Now remember, the Gesture of the Day is jumping. You need to jump when you see the letter G.” In another example, the target gesture reminder may comprise audio and visual feedback, such as a character appearing on the display device 22 who reminds the user 18 to perform the target gesture when the user sees the target item. In a more specific example, the character may verbally remind the user 18 and may demonstrate the target gesture as the letter G appears on the display device 22.
After a target gesture reminder has been provided, the method 300 may determine whether there are any remaining target item instances to be provided to the display device 22, as indicated at 324 in
Returning to 336, if the user gesture 34 matches the target gesture, then the user 18 has correctly reacted to the target item within the target timeframe corresponding to the target item, and has performed the target gesture. A reward image is then provided to the display device 22 for the user 18, as indicated at 340. In one example, the reward image comprises animated images of sparkles and colorful fireworks, and/or the target item being animated in a festive, celebratory manner. In another example, the reward image may include a character congratulating the user on a correct answer.
In some embodiments, when the user gesture 34 matches the target gesture, a correct answer instance is stored in the data-holding subsystem 42, as indicated at 342. As discussed above, the correct answer instance may be used in the performance measure provided to the display device 22 at 346.
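The disclosure does not fix the form of the performance measure computed from the stored correct and incorrect answer instances; one plausible sketch is the fraction of responses that were correct. The function below is a hypothetical illustration, not the disclosed implementation:

```python
def performance_measure(correct, incorrect):
    """One possible performance measure: the fraction of the user's
    recorded answer instances that were correct."""
    total = correct + incorrect
    return correct / total if total else 0.0
```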
In other embodiments, the reward image may be customized based on one or more factors, as indicated at 344. For example, the reward image may be customized to correspond to the target gesture performed by the user 18. In a more specific example, where the target gesture is a throwing motion that simulates throwing an imaginary ball at the target item 38, the reward image may be customized to simulate a ball impacting the display device and “exploding” into animated sparkles and fireworks.
In another example, the reward image may be customized to correspond to a number of correct answers given by the user 18. In a more specific example, upon the first correct answer the reward image may be customized to display a first level of sparkles and fireworks. Upon the second correct answer, the reward image may be customized to provide a second level of sparkles and fireworks that is greater than the first level. In another example, upon a third correct answer, the reward image may be customized to provide a third level of sparkles and fireworks that is greater than the second level, and may also include a character who praises the user 18. It will be appreciated that other forms, levels and combinations of reward image customization may be provided.
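The reward-customization examples above (scaling with the number of correct answers and matching the target gesture) can be sketched as follows. This is an illustrative Python sketch; the gesture labels, animation names, and three-level cap are hypothetical stand-ins for the examples in the text:

```python
def customize_reward(correct_answers, target_gesture):
    """Build a reward description whose intensity scales with the count
    of correct answers and whose animation matches the target gesture."""
    level = min(correct_answers, 3)  # illustrative three-level cap
    reward = {
        "sparkle_level": level,
        # e.g. a character who praises the user on the third correct answer
        "character_praise": level >= 3,
    }
    if target_gesture == "throw":
        # Simulate a ball impacting the display and "exploding" into
        # animated sparkles and fireworks.
        reward["animation"] = "ball_impact_explosion"
    else:
        reward["animation"] = "target_item_celebration"
    return reward
```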
In other embodiments, the pace of the display of the learning items may be increased upon each correct answer given by the user. For example, where an initial pace comprises each learning item remaining on the display for N seconds, upon each correct answer the pace of the display of the learning items may increase such that each learning item remains on the display for N−1 seconds. It will be appreciated that any suitable amount and/or formula for increasing the pace of display of the learning items may be used. In some embodiments, the current pace may be reset to a slower pace when an incorrect answer is given by the user.
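The pacing rule above (display time shortened by one second per correct answer, reset on an incorrect answer) can be sketched directly. This is a hypothetical Python sketch; the minimum-pace floor is an added assumption, since some lower bound is needed to keep the display time positive:

```python
def update_pace(current_pace, answer_correct, initial_pace=5.0, min_pace=1.0):
    """Return the next per-item display time in seconds: N seconds
    becomes N-1 seconds on each correct answer, and the pace resets to
    the slower initial value after an incorrect answer."""
    if answer_correct:
        return max(current_pace - 1.0, min_pace)
    return initial_pace
```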
After a reward image has been provided, the method 300 may determine whether there are any remaining target item instances to be provided to the display device 22, as indicated at 324 and described above. If there are no remaining target items to be provided to the display device 22, then the method 300 may provide to the display device 22 a performance measure, as indicated at 346 and described above. After providing the performance measure, the method 300 may end.
It will be appreciated that the order of the above-described methods and processes may be varied. For example, upon determining that a user gesture 34 is not within the target timeframe corresponding to a target item, the method 300 may next determine whether the user gesture 34 matches the target gesture. If the user gesture 34 does not match the target gesture, then the method 300 may provide a gesture reminder to the user.
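The per-item decision flow of method 300 in its default ordering (timing check first, then gesture match) can be sketched as a single step function. This is an illustrative Python sketch; the string outcomes and the `log` dictionary are hypothetical stand-ins for the disclosed feedback paths and for the answer instances stored in the data-holding subsystem:

```python
def assess_item(gesture, within_target_timeframe, target_gesture, log):
    """One decision step of the assessment loop: check whether a gesture
    was received, whether it fell in the target timeframe, and whether
    it matches the target gesture."""
    if gesture is None:
        # No reaction; may count toward the reaction-reminder thresholds.
        return "no_reaction"
    if not within_target_timeframe:
        log["incorrect"] += 1          # incorrect answer instance (step 330)
        return "hint"                  # select next hint from the hint structure
    if gesture != target_gesture:
        # Right timing, wrong movement: remind the user of the target gesture.
        return "target_gesture_reminder"
    log["correct"] += 1                # correct answer instance (step 342)
    return "reward_image"              # provide reward image (step 340)
```

As noted above, the order of these checks may be varied, for example by testing the gesture match before the timing.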
With reference now to
As explained above,
Logic subsystem 40 may include one or more physical devices configured to execute one or more instructions. For example, the logic subsystem 40 may be configured to execute one or more instructions that are part of one or more applications, services, programs, routines, libraries, objects, components, data structures, or other logical constructs. Such instructions may be implemented to perform a task, implement a data type, transform the state of one or more devices, or otherwise arrive at a desired result.
The logic subsystem 40 may include one or more processors that are configured to execute software instructions. Additionally or alternatively, the logic subsystem 40 may include one or more hardware or firmware logic machines configured to execute hardware or firmware instructions. Processors of the logic subsystem 40 may be single core or multicore, and the programs executed thereon may be configured for parallel or distributed processing. The logic subsystem 40 may optionally include individual components that are distributed throughout two or more devices, which may be remotely located and/or configured for coordinated processing. One or more aspects of the logic subsystem 40 may be virtualized and executed by remotely accessible networked computing devices configured in a cloud computing configuration.
Data-holding subsystem 42 may include one or more physical, non-transitory devices configured to hold data and/or instructions executable by the logic subsystem 40 to implement the herein described methods and processes. When such methods and processes are implemented, the state of data-holding subsystem 42 may be transformed (e.g., to hold different data).
Data-holding subsystem 42 may include removable media and/or built-in devices, such as DVD 46. Data-holding subsystem 42 may include optical memory devices (e.g., CD, DVD, HD-DVD, Blu-Ray Disc, etc.), semiconductor memory devices (e.g., RAM, EPROM, EEPROM, etc.) and/or magnetic memory devices (e.g., hard disk drive, floppy disk drive, tape drive, MRAM, etc.), among others. Data-holding subsystem 42 may include devices with one or more of the following characteristics: volatile, nonvolatile, dynamic, static, read/write, read-only, random access, sequential access, location addressable, file addressable, and content addressable. In some embodiments, logic subsystem 40 and data-holding subsystem 42 may be integrated into one or more common devices, such as an application specific integrated circuit or a system on a chip.
It is to be appreciated that data-holding subsystem 42 includes one or more physical, non-transitory devices. In contrast, in some embodiments aspects of the instructions described herein may be propagated in a transitory fashion by a pure signal (e.g., an electromagnetic signal, an optical signal, etc.) that is not held by a physical device for at least a finite duration. Furthermore, data and/or other forms of information pertaining to the present disclosure may be propagated by a pure signal.
Display subsystem 44 may be used to present a visual representation of data held by data-holding subsystem 42. As the herein described methods and processes change the data held by the data-holding subsystem 42, and thus transform the state of the data-holding subsystem 42, the state of display subsystem 44 may likewise be transformed to visually represent changes in the underlying data. Display subsystem 44 may include one or more display devices, such as display device 22, utilizing virtually any type of technology. Such display devices may be combined with logic subsystem 40 and/or data-holding subsystem 42 in a shared enclosure, or such display devices may be peripheral display devices.
It is to be understood that the configurations and/or approaches described herein are exemplary in nature, and that these specific embodiments or examples are not to be considered in a limiting sense, because numerous variations are possible. The specific routines or methods described herein may represent one or more of any number of processing strategies. As such, various acts illustrated may be performed in the sequence illustrated, in other sequences, in parallel, or in some cases omitted. Likewise, the order of the above-described processes may be changed.
The subject matter of the present disclosure includes all novel and nonobvious combinations and subcombinations of the various processes, systems and configurations, and other features, functions, acts, and/or properties disclosed herein, as well as any and all equivalents thereof.