Microphone-based video games allow a player to interact with a video game via a microphone. For example, a karaoke game may allow one or more players to sing along with a song and to be scored based upon one or more aspects of the players' performance, such as tone, pitch, timing, accuracy in following lyrics displayed during game play, etc.
Where such games are intended for one or more players, it may be cumbersome for a player to join a game while other players are currently playing the game. For example, a player joining mid-game may have to use a handheld controller other than the microphone to navigate through menus or the like before game play resumes with that player added. Such disruptions may discourage potential players from joining a game already in progress. Further, feedback provided during such games, such as cheering by a “virtual audience” in the video game, is generally provided when a player achieves a pre-defined criterion in the game, and is not otherwise initiated by player actions.
Accordingly, various embodiments are disclosed herein that relate to the control of a microphone-based video game. For example, one disclosed embodiment provides a method of operating a microphone-based video game, wherein the method comprises presenting the video game on a display, receiving one or more motion sensor signals from a microphone comprising a motion sensor, detecting a change of state of the microphone between an in-use state and an inactive state based upon the one or more motion sensor signals received from the microphone, and in response to the change of state of the microphone, changing a number of representations of players displayed in the video game.
This Summary is provided to introduce a selection of concepts in a simplified form that are further described below in the Detailed Description. This Summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter. Furthermore, the claimed subject matter is not limited to implementations that solve any or all disadvantages noted in any part of this disclosure.
Embodiments are disclosed herein that relate to the control of a microphone-based video game via a microphone, rather than via a separate handheld controller, console, or the like.
Referring to the display 16 shown in
In conventional microphone-based games, joining an additional player to a game may involve various steps that are disruptive to the experience of other players currently playing the game. For example, the addition of a new player may involve selecting an “add a player” control from a menu accessible via a control on the video game console 30 or on a handheld controller (not shown) other than a microphone. This may cause a new player to wait until a song selection is over before joining a game, and therefore may discourage new players from joining.
Thus,
Next, method 200 comprises, at 204, receiving a set of motion signals from a motion sensor on the microphone. The set of motion signals may comprise, for example, a set of signals that represents a microphone being lifted from a surface after a period of inactivity, being lifted and then shaken, being set down onto a surface after a period of activity, or any other suitable combination of motion events.
In response to the set of motion sensor signals received from the motion sensor on the microphone, method 200 next comprises, at 206, detecting a change of state of the microphone between an in-use state and an inactive state. For example, where the set of motion sensor signals represents a microphone being lifted from a surface after a period of inactivity, or lifted and then shaken after a period of inactivity, etc., it may be determined that the microphone has changed from an inactive state to an in-use state. Likewise, where the set of motion sensor signals represents a microphone being placed onto a surface and then left at rest for longer than a threshold period, then it may be determined that the microphone has changed from an in-use state to an inactive state. It will be understood that these motion patterns are disclosed for the purpose of example, and are not intended to be limiting in any manner.
In response to detecting the change of state of the microphone between an in-use state and an inactive state, method 200 next comprises, at 208, changing a number of representations of players displayed in the video game. For example, if it is determined that a change from an inactive state to an in-use state has occurred, then a representation of a player may be added to the video game. Likewise, if it is determined that a change from an in-use state to an inactive state has occurred, then a representation of a player may be removed from the game. In this manner, players may join and leave a microphone-based video game simply by performing a gesture with the microphone, where the gesture may be as simple as lifting the microphone from a surface, lifting and then shaking the microphone, by placing a microphone down on a surface, or any other suitable gesture or combination of gestures that follows a period of inactivity.
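The state-change and join/leave flow described above can be sketched as follows. This is a minimal, hypothetical illustration only: the class, event labels, and motion patterns are illustrative assumptions, not taken from the disclosure.

```python
# Hypothetical sketch (not from the disclosure): classifying a set of
# motion events into a change of microphone state, and changing the
# number of player representations accordingly. All names and event
# labels are illustrative.

INACTIVE, IN_USE = "inactive", "in_use"

# Example motion patterns, as in the examples above.
JOIN_PATTERNS = {("lift",), ("lift", "shake")}   # inactive -> in-use
LEAVE_PATTERNS = {("set_down", "rest")}          # in-use -> inactive


class MicrophoneGame:
    def __init__(self):
        self.mic_state = INACTIVE
        self.player_representations = []

    def on_motion_events(self, events):
        """Detect a state change and add or remove a player representation."""
        events = tuple(events)
        if self.mic_state == INACTIVE and events in JOIN_PATTERNS:
            self.mic_state = IN_USE
            self.player_representations.append("player")
        elif self.mic_state == IN_USE and events in LEAVE_PATTERNS:
            self.mic_state = INACTIVE
            self.player_representations.pop()
        return len(self.player_representations)
```

In this sketch, lifting (or lifting and shaking) an inactive microphone adds a player representation, and setting an in-use microphone down to rest removes one, so joining and leaving require no handheld controller.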
Continuing with
If the additional player is the first player to join the game, then, at 308, method 300 comprises changing the presentation of the video game from a background entertainment mode to an active mode. A background entertainment mode, for example, may be a video game mode that is configured to entice potential players to pick up a microphone, and thereby be automatically joined to the game. Such a mode may be configured to play music videos, to play a trailer for the microphone video game, or to present any other suitable content. In any case, whether the additional player is a first player or is joining other current players, method 300 next comprises, at 310, presenting the video game with a second number of players.
Next, method 300 comprises, at 311, receiving a second set of signals from the microphone during game play, wherein the second set of signals comprises one or more of voice signals and motion signals. Then, at 312, method 300 comprises detecting a predetermined control pattern that is distinguishable from the vocal and gesture inputs received during normal game play. The predetermined control pattern represents a pattern that a player may input into the microphone via gestures, vocals or other audio input (e.g. hitting the microphone with a hand), and/or combinations thereof, that is configured to cause a pre-determined control action to be performed by the video game system. For example, as shown at 314, in response to the predetermined control pattern, method 300 may comprise changing a play/pause state of music being played in a video game. As a more specific example, if a player wishes to pause playback of a video game, the player may shake the microphone. Therefore, in this example, the control signal comprises a shaking gesture performed by a currently in-use, as opposed to inactive, microphone. Likewise, once the player wishes to resume play, the player may again shake the microphone to cause playback to proceed. It will be understood that a control signal also may be an audio signal, and/or a combination of audio and motion signals. Further, it will be understood that the specific examples of control signals herein are described for the purpose of example, and are not intended to be limiting in any manner.
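The shake-to-pause control pattern in this example can be sketched as a simple toggle. The class and method names below are hypothetical illustrations, not part of the disclosure.

```python
# Hypothetical sketch of the shake-to-pause control pattern described
# above: a shake gesture from an in-use microphone toggles the
# play/pause state of music in the game. Names are illustrative.

class PlaybackController:
    def __init__(self):
        self.playing = True

    def on_gesture(self, gesture, mic_in_use):
        """Toggle play/pause on a shake from an in-use microphone.

        A shake of an inactive microphone is instead treated as a join
        gesture elsewhere, so it is ignored here.
        """
        if mic_in_use and gesture == "shake":
            self.playing = not self.playing
        return self.playing
```

Gating on the in-use state keeps this control pattern distinguishable from the shake gesture that joins an inactive microphone's player to the game.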
Continuing with
The threshold period may have any suitable duration. For example, in some embodiments, it may be desired to allow players to place a microphone down on a table or other surface to get a snack or drink, to use the restroom, etc. without removing the player from the game, as removing the player from the game may have various consequences, such as the removal of the player's score from the game, that are undesirable to a player who wishes to take only a brief break from the game. In this example, a suitable threshold duration may comprise a duration of 2 minutes or less. In a more specific example, a threshold duration of 30-60 seconds, such as 45 seconds, may be used. It will be understood that these specific times and ranges of times are presented for the purpose of example, and are not intended to be limiting in any manner. Further, it will be understood that a video game system may utilize any subset of the various control signals and responses described in the context of
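The threshold comparison described above can be sketched as follows, using the 45-second example duration. The function name and the use of seconds are assumptions for illustration.

```python
# Hypothetical sketch of the inactivity threshold described above: a
# player is removed only after the microphone has rested longer than a
# threshold (45 seconds here, one of the example durations given).
# The function name and time units are illustrative.

REST_THRESHOLD_S = 45.0

def should_remove_player(set_down_time_s, now_s):
    """Return True once the microphone has rested past the threshold."""
    return (now_s - set_down_time_s) > REST_THRESHOLD_S
```

A brief break to get a snack thus leaves the player, and the player's score, in the game, while a longer rest removes them.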
The removal of the player from the game upon detecting a period of inactivity in the set of motion signals from the microphone may be performed in any suitable manner. For example, referring to
While the examples of
Next, at 606, method 600 comprises receiving a set of signals from a microphone in use by a player, wherein the set of signals comprises one or more of a representation of a microphone gesture (e.g. motion signal 608) and a representation of a vocal sample (e.g. voice signal 610) for the virtual audience to repeat back to the player. The motion signal may comprise a microphone gesture repeated by the player, such as a player waving his or her arms back and forth. This is illustrated in
Such gestures and/or vocal samples may be input at a specified portion in the game designated for such interaction with the virtual audience, or may be performed spontaneously by a player at any point in the game. Where the gestures and/or vocal samples are performed spontaneously, the gestures and/or vocal samples may further comprise a pre-selected control signal that triggers the video game to display the audience mimicking the player's gesture and/or vocal input. For example, in the case of a gesture, the repetition of a gesture more than a threshold number of times may be configured to cause the video game to display the audience mimicking the action. Likewise, in the case of the vocal input, a gesture used in combination with the vocal input, such as the player pointing the microphone toward the display screen after singing a desired vocal phrase, may be configured to cause the video game to repeat the vocal sample.
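The repetition-count trigger described above can be sketched as a simple counter. The threshold value, class, and gesture labels below are hypothetical illustrations, not from the disclosure.

```python
# Hypothetical sketch: a spontaneously repeated gesture triggers the
# virtual audience to mimic it once the repetition count exceeds a
# threshold. The threshold value and names are illustrative.

REPEAT_THRESHOLD = 3

class AudienceMimic:
    def __init__(self):
        self._last_gesture = None
        self._count = 0

    def on_gesture(self, gesture):
        """Count consecutive repeats; return the gesture to mimic, or None."""
        if gesture == self._last_gesture:
            self._count += 1
        else:
            self._last_gesture, self._count = gesture, 1
        if self._count > REPEAT_THRESHOLD:
            self._count = 0  # reset so the audience mimics once per burst
            return gesture
        return None
```

Requiring more than a threshold number of repeats helps distinguish a deliberate audience-interaction gesture from incidental microphone motion during normal game play.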
In the case of the vocal sample, the duration of the sample to be played back may be determined in any suitable manner. For example, in some embodiments, a vocal sample of a fixed time length may be played back to the player. In other embodiments, a vocal sample may be parsed to locate a vocal pause that is determined to be between spoken or sung phrases (e.g. lasts a predetermined duration, is preceded by a drop in vocal tone or pitch, etc.), and playback may begin at the vocal pause. It will be understood that these methods for determining where to begin playback of a vocal sample are presented for the purpose of example, and are not intended to be limiting in any manner.
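The pause-locating embodiment above can be sketched over an amplitude envelope. The silence level, minimum pause length, and function name are assumptions for illustration; a real implementation might also weigh tone or pitch, as noted above.

```python
# Hypothetical sketch of locating a vocal pause in a sampled amplitude
# envelope: a run of low-amplitude frames longer than a minimum duration
# is treated as a pause between phrases, and playback begins just after
# the last such pause. Thresholds and names are illustrative.

SILENCE_LEVEL = 0.05      # amplitude below this counts as silence
MIN_PAUSE_FRAMES = 3      # minimum pause length, in frames

def find_playback_start(envelope):
    """Return the index after the last qualifying pause, or 0 if none."""
    start, run = 0, 0
    for i, amp in enumerate(envelope):
        if amp < SILENCE_LEVEL:
            run += 1
            if run >= MIN_PAUSE_FRAMES:
                start = i + 1  # playback begins after the pause
        else:
            run = 0
    return start
```

Beginning playback after the detected pause lets the virtual audience repeat a complete phrase back to the player rather than a fragment cut mid-word.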
Continuing with
The entertainment controller 802 may comprise programs or code stored in memory 810 and executable by the processor 812 to perform the various video game control methods disclosed herein. Generally, programs include routines, objects, components, data structures, and the like that perform particular tasks or implement particular abstract data types. The term “program” as used herein may connote a single program or multiple programs acting in concert, and may be used to denote applications, services, or any other type or class of program.
Continuing with
In some embodiments, the microphone 804 may further comprise a plurality of light sources, shown as light source 1, light source 2, and light source n at 832, 834, and 836, respectively, configured to provide an additional player feedback mechanism. Each light source may comprise any suitable components, including but not limited to light bulbs, LEDs, lasers, as well as various optical components to direct light to outlets located at desired locations on the microphone casing.
It will be appreciated that the configurations and/or approaches described herein are exemplary in nature, and that these specific embodiments or examples are not to be considered in a limiting sense, because numerous variations are possible. The specific routines or methods described herein may represent one or more of any number of processing strategies such as event-driven, interrupt-driven, multi-tasking, multi-threading, and the like. As such, various acts illustrated may be performed in the sequence illustrated, in parallel, or in some cases omitted. Likewise, the order of any of the above-described processes is not necessarily required to achieve the features and/or results of the embodiments described herein, but is provided for ease of illustration and description. The subject matter of the present disclosure includes all novel and nonobvious combinations and subcombinations of the various processes, systems and configurations, and other features, functions, acts, and/or properties disclosed herein, as well as any and all equivalents thereof.