1. Field of the Invention
The present invention relates to game control technology and, particularly, to a game device that allows a user to control an object such as a game character by using an input device.
2. Description of the Related Art
With the improvement in the processing capabilities of game devices, a great number of games are now available that allow a user to control a game character in a game field created by three-dimensional modeling. There has also been proposed a technology of capturing an image of a user-manipulated real object with a camera and using the motion of the real object captured in the camera image as an input to the game, in addition to input provided by button operations.
An input device and a game are closely related to each other. As a novel input device becomes available, game applications that utilize the uniqueness of that input device are developed. Driven by the need to serve diversified demand from users, the inventors have conceived a novel game application that is run in coordination with an input device manipulated by a user.
A purpose of the present invention is to provide a technology of reflecting the motion of an input device manipulated by a user in the action of an object in a game.
In order to address the above-described issue, a computer program according to one embodiment of the present invention, embedded in a non-transitory computer-readable recording medium, comprises: a module configured to determine whether control information indicating an operation of an input device meets a predetermined condition for operation; and a module configured to cause, when the condition for operation is determined to be met, an object to perform an action mapped to the condition for operation. The predetermined condition for operation requires that the input device be moved by a predetermined amount within a predetermined period of time. The determination module measures the time elapsed since the start of a movement of the input device and determines, when the input device is moved by the predetermined amount within the predetermined period of time, that the condition for operation is met.
A computer program according to another embodiment of the present invention, embedded in a non-transitory computer-readable recording medium, comprises: a module configured to determine whether control information indicating an operation of an input device meets a predetermined condition for operation; and a module configured to cause, when the condition for operation is determined to be met, an object to perform an action mapped to the condition for operation. The condition for operation to cause the object to jump in a game space requires that the input device be directed substantially in the vertical direction and be moved upward in the vertical direction. When the determination module determines that the condition for operation to cause the object to jump is met, the object control module causes the object to jump in the game space.
Still another embodiment of the present invention relates to an object control method. The method is adapted to cause an object to perform an action in accordance with an operation using an input device, and comprises: acquiring an image of an input device having a light-emitting body; acquiring control information indicating an operation of the input device by referring to positional information of the input device derived from the acquired image, and/or orientation information of the input device; storing the correspondence between a predetermined action of the object and a condition for operation of the input device; determining whether the control information meets the condition for operation; and causing, when the condition for operation is determined to be met, the object to perform an action mapped to the condition for operation. Said storing stores a condition for operation requiring that the input device be moved by a predetermined amount within a predetermined period of time, and said determination measures time elapsed since the start of a movement of the input device and determines, when the input device is moved by the predetermined amount within the predetermined period of time, that the condition for operation is met.
Still another embodiment of the present invention relates to a game device adapted to cause an object to perform an action in accordance with an operation using an input device. The game device comprises: an image acquisition unit configured to acquire an image of an input device having a light-emitting body; a control information acquisition unit configured to acquire control information indicating an operation of the input device by referring to positional information of the input device derived from the acquired image, and/or orientation information of the input device; a storage unit configured to store correspondence between a predetermined action of the object and a condition for operation of the input device; a condition determination unit configured to determine whether the control information meets the condition for operation stored in the storage unit; and an object control unit configured to cause, when the condition for operation is determined by the condition determination unit to be met, the object to perform an action mapped to the condition for operation. The storage unit stores a condition for operation requiring that the input device be moved by a predetermined amount within a predetermined period of time, and the condition determination unit measures the time elapsed since the start of a movement of the input device and determines, when the input device is moved by the predetermined amount within the predetermined period of time, that the condition for operation is met. The control information of the input device may be the positional information and/or orientation information of the input device itself or information derived from the positional information and orientation information by computation.
Optional combinations of the aforementioned constituting elements, and implementations of the invention in the form of methods, apparatuses, systems, recording mediums and computer programs may also be practiced as additional modes of the present invention.
Embodiments will now be described, by way of example only, with reference to the accompanying drawings, which are meant to be exemplary, not limiting, and wherein like elements are numbered alike in the several figures.
The invention will now be described by reference to the preferred embodiments. This is not intended to limit the scope of the present invention, but to exemplify the invention.
An embodiment of the present invention provides a game device configured to acquire positional information and/or orientation information in a real space of an input device functioning as a game controller as game control information and capable of running game software in accordance with the control information.
The input device 20 is a user input device that allows a user to provide a command. The game device 10 is a processing device adapted to run a game program in accordance with a user command provided via the input device 20 and generate an image signal indicating a result of game processing.
The input device 20 is driven by a battery and is provided with multiple buttons for providing a user command. When the user operates a button of the input device 20, information on the state of the button is transmitted to the game device 10. The game device 10 receives the button state information from the input device 20, controls the progress of the game in accordance with the user command associated with the button state information, and generates a game image signal. The generated game image signal is output from the display device 12.
The input device 20 has the function of transferring the button state information produced by the user to the game device 10 and is configured according to the embodiment as a wireless controller capable of communicating with the game device 10 wirelessly. The input device 20 and the game device 10 may establish wireless connection using the Bluetooth (registered trademark) protocol. Alternatively, the input device 20 may not be a wireless controller but may be connected to the game device 10 using a cable.
The imaging device 14 is a video camera comprising a CCD imaging device, a CMOS imaging device, or the like. The device 14 captures an image of a real space at predetermined intervals so as to generate frame images. For example, the imaging device 14 may capture 60 images per second to match the frame rate of the display device 12. The imaging device 14 is connected to the game device 10 via a universal serial bus (USB) or another interface.
The display device 12 is a display that outputs an image and displays a game screen by receiving an image signal generated by the game device 10. The display device 12 may be a television set provided with a display and a speaker. Alternatively, the display device 12 may be a computer display. The display device 12 may be connected to the game device 10 using a cable. Alternatively, the device 12 may be wirelessly connected using a wireless local area network (LAN).
The input device 20 in the game system 1 according to the embodiment is provided with a light-emitting body. The light-emitting body of the input device 20 is configured to emit light of multiple colors. The color emitted by the light-emitting body can be configured according to a command for light emission from the game device 10. During the game, the light-emitting body emits light of a predetermined color, which is imaged by the imaging device 14. The imaging device 14 captures an image of the input device 20 and generates a frame image, supplying the image to the game device 10. The game device 10 acquires the frame image and derives information on the position of the light-emitting body in the real space in accordance with the position and size of the image of the light-emitting body in the frame image. The game device 10 treats the positional information as a command to control the game and reflects the information in game processing by, for example, controlling the action of a player's character. The game device 10 according to the embodiment is provided with the function of running a game program not only using the button state information of the input device 20 but also using the positional information of the acquired image of the light-emitting body.
The input device 20 is provided with an acceleration sensor and a gyro sensor. The values detected by the sensors are transmitted to the game device 10 at predetermined intervals. The game device 10 acquires the values detected by the sensors and acquires information on the orientation of the input device 20 in the real space. The game device 10 treats the orientation information as a user command in the game and reflects the information in game processing. Thus, the game device 10 according to the embodiment has the function of running a game program using the acquired orientation information of the input device 20. The game device 10 not only treats the positional information and orientation information of the input device 20 directly as user commands but also treats information derived from them by computation as user commands.
The user plays the game viewing the game screen displayed on the display device 12. Because it is necessary to capture an image of the light-emitting body 22 while the game software is being run, the imaging device 14 is preferably oriented to face the same direction as the display device 12. Typically, the user plays the game in front of the display device 12. Therefore, the imaging device 14 is arranged such that the direction of its optical axis is aligned with the frontward direction of the display device 12. More specifically, the imaging device 14 is preferably located to include in its imaging range those positions in the neighborhood of the display device 12 where the user can view the display screen of the display device 12. This allows the imaging device 14 to capture an image of the input device 20 held by the user playing the game.
The processing unit 50 comprises a main control unit 52, an input acknowledging unit 54, a three-axis acceleration sensor 56, a three-axis gyro sensor 58, and a light-emission control unit 60. The main control unit 52 exchanges necessary data with the wireless communication module 48.
The input acknowledging unit 54 acknowledges input information from the control buttons 30, 32, 34, 36, 38, and 40 and sends the information to the main control unit 52. The three-axis acceleration sensor 56 detects acceleration components in three directions defined by the X, Y, and Z axes. The three-axis gyro sensor 58 detects angular velocity in the XZ plane, the ZY plane, and the YX plane. In this embodiment, the width direction of the input device 20 is defined as the X-axis, the height direction as the Y-axis, and the longitudinal direction as the Z-axis. The three-axis acceleration sensor 56 and the three-axis gyro sensor 58 are provided in the handle 24 of the input device 20 and, more preferably, in the center or the neighborhood of the center of the handle 24. Along with the input information from the control buttons, the wireless communication module 48 sends information on the value detected by the three-axis acceleration sensor 56 and information on the value detected by the three-axis gyro sensor 58 to the wireless communication module of the game device 10 at predetermined intervals. The interval of transmission is set to, for example, 11.25 milliseconds.
The light-emission control unit 60 controls light emission from the light-emitting unit 62. The light-emitting unit 62 comprises a red LED 64a, a green LED 64b and a blue LED 64c and is capable of emitting light of multiple colors. The light-emission control unit 60 adjusts light-emission from the red LED 64a, green LED 64b and blue LED 64c so as to cause the light-emitting unit 62 to emit light of a desired color.
In response to a command to emit light from the game device 10, the wireless communication module 48 supplies the command to the main control unit 52, whereupon the main control unit 52 supplies the command to the light-emission control unit 60. The light-emission control unit 60 controls light emission from the red LED 64a, the green LED 64b, and the blue LED 64c so as to cause the light-emitting unit 62 to emit light of a designated color. For example, the light-emission control unit 60 may control light emission from the LEDs using pulse width modulation (PWM) control.
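As a brief illustration of the PWM-based control mentioned above, the sketch below maps a designated color to per-LED duty cycles. The function name, the 8-bit color representation, and the linear mapping are assumptions made for illustration and are not part of the embodiment.

```python
def rgb_to_duty_cycles(red, green, blue, max_value=255):
    """Map an 8-bit RGB color to PWM duty cycles (0.0 to 1.0) for the
    red, green, and blue LEDs; a hypothetical, simplified mapping."""
    return (red / max_value, green / max_value, blue / max_value)

# Example: a color designated by a light-emission command.
duty_r, duty_g, duty_b = rgb_to_duty_cycles(0, 200, 255)
```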
The functions of the game device 10 are implemented by a CPU, a memory, and a program or the like loaded into the memory.
When the start button 42 of the input device 20 of the game system 1 according to the embodiment is pressed, a start request is transmitted to the game device 10, turning the power of the game device 10 on. The wireless communication module 48 calls the game device 10 using the identification information identifying the game device 10. The wireless communication module 86 of the game device 10 responds to the call so that connection is established between the wireless communication module 48 and the wireless communication module 86. The input device 20 operates as a master and the game device 10 operates as a slave. After the connection is established, the devices change roles. As a result of the communication process as described above, the input device 20 can transmit information on the status of the control buttons, and information on values detected by the three-axis acceleration sensor 56 and the three-axis gyro sensor 58 to the game device 10 at predetermined intervals.
The wireless communication module 86 receives information on the status of the control buttons and information on values detected by the sensors, which are transmitted by the input device 20, and supplies the information to the input acknowledging unit 88. The input acknowledging unit 88 isolates the button status information from the sensor detection information, delivering the button status information to the application processing unit 100, and the sensor detection information to the device information processing unit 90. The application processing unit 100 receives the button status information as a command to control the game.
The frame image acquisition unit 80 is configured as a USB interface and acquires frame images at a predetermined imaging speed (e.g., 60 frames/sec) from the imaging device 14. The image processing unit 82 extracts an image of the light-emitting body from the frame image. The image processing unit 82 identifies the position and size of the image of the light-emitting body in the frame image.
The image processing unit 82 may binarize the frame image data using threshold RGB values that depend on the emitted color of the light-emitting unit 62 and generate a binarized image. The light-emitting unit 62 emits light of a color designated by the game device 10, so the image processing unit 82 can identify the color of light emitted by the light-emitting unit 62. Therefore, the image of the light-emitting unit 62 can be extracted from the frame image by binarization. Binarization encodes pixel values of pixels having luminance higher than a predetermined threshold value into “1” and encodes pixel values of pixels having luminance equal to or lower than the predetermined threshold value into “0”. This allows the image processing unit 82 to identify the position and size of the image of the light-emitting body from the binarized image. For example, the image processing unit 82 identifies the barycentric coordinates of the image of the light-emitting body in the frame image and identifies the radius and area of the image of the light-emitting body.
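A minimal sketch of the binarization and measurement steps is given below, assuming the frame is available as a NumPy array and that simple per-channel thresholds stand in for the color-dependent thresholds mentioned above; the function name and parameters are illustrative only.

```python
import numpy as np

def extract_light_emitting_body(frame_rgb, lower, upper):
    """Binarize a frame against per-channel thresholds and return the
    barycentric coordinates, equivalent radius, and area of the bright
    region; returns None when no pixel passes the threshold."""
    mask = np.all((frame_rgb >= lower) & (frame_rgb <= upper), axis=2)
    ys, xs = np.nonzero(mask)
    if xs.size == 0:
        return None  # light-emitting body not visible in this frame
    area = xs.size
    barycenter = (xs.mean(), ys.mean())
    radius = float(np.sqrt(area / np.pi))  # radius of an equivalent circle
    return barycenter, radius, area
```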
When multiple users use input devices 20 so that multiple light-emitting bodies 22 are found in the frame image, the image processing unit 82 generates multiple binarized images using threshold values that depend on the emitted color of the respective light-emitting bodies 22, so as to identify the position and size of the images of the respective light-emitting bodies. The image processing unit 82 delivers the position and size of the image of the respective light-emitting body thus identified to the device information processing unit 90.
When the position and size of the image of the light-emitting body identified by the image processing unit 82 are acquired, the device information processing unit 90 derives the positional information of the input device 20 as viewed from the imaging device 14. Further, the device information processing unit 90 acquires the sensor detection information from the input acknowledging unit 88 and derives the information on the orientation of the input device 20 in the real space.
Once the reference orientation is registered, the orientation information deriving unit 94 derives the current orientation information by comparing the acquired sensor detection information with the reference orientation information.
The positional information deriving unit 92 acquires the position and size of the image of the light-emitting body. The positional information deriving unit 92 derives the positional coordinates in the frame image by referring to the barycentric coordinates of the image of the light-emitting body. The unit 92 further derives information on the distance from the imaging device 14 by referring to the radius and area of the image of the light-emitting body. The positional information deriving unit 92 may be provided with a table mapping the barycentric position coordinates in the frame image, the radius (or area) of the image of the light-emitting body, and the distance, and may derive the distance information by referring to the table. Alternatively, the positional information deriving unit 92 may use the barycentric position coordinates in the frame image and the radius (or area) of the image of the light-emitting body to derive the distance information by computation. The positional information deriving unit 92 uses the positional coordinates in the frame image and the derived distance information to derive the information on the position of the light-emitting body 22 in the real space.
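The computation can be sketched with a pinhole-camera approximation in which the apparent radius is inversely proportional to distance. The focal length and the physical radius of the light-emitting body used below are illustrative assumptions, and a lookup table could equally be used as described above.

```python
def derive_real_space_position(cx, cy, radius_px, focal_px=700.0,
                               body_radius_mm=22.5,
                               image_size=(640, 480)):
    """Estimate (x, y, z) of the light-emitting body in millimeters from
    its barycentric image coordinates and apparent radius in pixels."""
    width, height = image_size
    distance_mm = focal_px * body_radius_mm / radius_px
    # Convert pixel offsets from the image center into millimeters.
    x_mm = (cx - width / 2) * distance_mm / focal_px
    y_mm = (height / 2 - cy) * distance_mm / focal_px
    return x_mm, y_mm, distance_mm
```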
When the orientation information deriving unit 94 derives the orientation information of the input device 20 and when the positional information deriving unit 92 derives the positional information of the light-emitting body 22, the control information acquisition unit 96 receives the derived information and acquires the control information of the input device 20 accordingly. For example, the control information of the input device 20 acquired by the control information acquisition unit 96 includes the following items.
The control information acquisition unit 96 acquires the angle of inclination of the input device 20, the speed at which the inclination changes, the roll angle, pitch angle, and the yaw angle from the sensor detection information or the orientation information derived in the orientation information deriving unit 94. Similarly, the control information acquisition unit 96 determines the moving speed and acceleration of the handle 24 from the sensor detection information or the orientation information. The control information acquisition unit 96 determines the position, moving speed, and acceleration of the light-emitting body 22 from the positional information of the light-emitting body derived in the positional information deriving unit 92. Further, the control information acquisition unit 96 has information of the relative position of the light-emitting body 22 and the handle 24 forming the input device 20. For example, by acquiring the orientation information of the input device 20 and the positional information of the light-emitting body 22, the control information acquisition unit 96 can acquire the positional information (e.g., the positional information of the barycenter) on the handle 24 by referring to the relative position of the light-emitting body 22 and the handle 24. The position, moving speed, and acceleration of the light-emitting body 22 may be determined from the sensor detection information or the orientation information derived in the orientation information deriving unit 94.
In the embodiment, the positional information of the light-emitting body 22 and the positional information of the handle 24 are dealt with as the positional information of the input device 20. Similarly, the moving speed information of the light-emitting body 22 and the moving speed information of the handle 24 are dealt with as the moving speed information of the input device 20. The acceleration information of the light-emitting body 22 and the acceleration information of the handle 24 are dealt with as the acceleration information of the input device 20.
As described above, the control information acquisition unit 96 acquires the control information of the input device 20 by referring to the positional information of the input device 20, the sensor detection information, and/or the orientation information of the input device 20. The control information acquisition unit 96 delivers the acquired control information to the application processing unit 100. The application processing unit 100 receives the control information of the input device 20 as a user command in the game.
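For illustration only, the control information delivered to the application processing unit 100 might be grouped as in the sketch below; the field names are assumptions and the items reflect those described above (angles and their rates of change, and the positions, moving speeds, and accelerations of the light-emitting body 22 and the handle 24).

```python
from dataclasses import dataclass
from typing import Tuple

Vec3 = Tuple[float, float, float]

@dataclass
class ControlInfo:
    """Hypothetical grouping of the control information of the input
    device 20; field names are illustrative, not from the embodiment."""
    roll: float                 # degrees
    pitch: float                # degrees
    yaw: float                  # degrees
    inclination_speed: float    # degrees per second
    sphere_position: Vec3       # light-emitting body 22, mm
    sphere_velocity: Vec3
    sphere_acceleration: Vec3
    handle_position: Vec3       # handle 24, mm
    handle_velocity: Vec3
    handle_acceleration: Vec3
```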
The application processing unit 100 uses the control information and button status information of the input device 20 to advance the game, and generates an image signal indicating the result of processing the game application. The image signal is sent from the output unit 84 to the display device 12 and output as a displayed image.
The user command acknowledging unit 102 acknowledges control information of the input device 20 from the device information processing unit 90 and the button state information from the input acknowledging unit 88 as user commands. The control unit 110 runs the game and advances the game in accordance with the user commands acknowledged by the user command acknowledging unit 102. The parameter storage unit 150 stores parameters necessary for the progress of the game. The three-dimensional data storage unit 152 stores three-dimensional data forming the game space. The display control unit 116 controls the camera to render a three-dimensional game space in accordance with the movement of a game object and causes the image generation unit 156 to generate a display screen. The image generation unit 156 sets the viewpoint position and viewing direction of the camera in the game space and renders the three-dimensional data so as to generate a display screen representing the game space controlled by the control unit 110, i.e., a display screen that reflects the action of the game object.
The behavior table storage unit 154 maintains correspondence between certain types of behavior (action) of the game object and conditions for operation of the input device 20 (operations required to trigger the behaviors). A condition for operation is a condition that causes the object to perform a corresponding behavior (action). The condition for operation may comprise a plurality of individual conditions. The behavior table storage unit 154 according to the embodiment maintains behaviors of a game object and conditions for operation in a table format. The unit 154 may maintain the correspondence in another format. The behavior table storage unit 154 may employ any format so long as behaviors of a game object and conditions for operation are mapped to each other. The condition determination unit 112 determines whether the control information acknowledged by the user command acknowledging unit 102 meets a condition stored in the behavior table storage unit 154.
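A behavior table of this kind could be represented as in the sketch below. The textual condition syntax and the dictionary layout are assumptions chosen for readability; the entries follow the turn, dash, and jump conditions described later.

```python
# Operators: "&" = all individual conditions must hold concurrently,
# "$" = a met condition stays met until the remaining ones are fulfilled.
BEHAVIOR_TABLE = {
    "turn to left":  ("&", ["-60deg < YAW < -2deg"]),
    "turn to right": ("&", ["2deg < YAW < 60deg"]),
    "dash": ("&", ["-35deg < PITCH < 35deg",
                   "-40deg < YAW < 40deg",
                   "HANDLEPOS(Z) < -150mm : TIME 1.0s"]),
    "jump": ("$", ["SPHEREPOS(Y) > 150mm : TIME 1.0s",
                   "PITCH > 65deg"]),
}
```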
The obstacle 202 is a stationary object. The position of the obstacle 202 in the three-dimensional space is defined by the data stored in the three-dimensional data storage unit 152. The display control unit 116 determines the viewing direction and viewpoint position of the virtual camera in the game space defined in a world coordinate system, in accordance with the direction of movement of the game object 200 and the action performed that are determined by a user input command. The image generation unit 156 locates the virtual camera at the viewpoint position thus determined, renders the three-dimensional data stored in the three-dimensional data storage unit 152, aligning the optical axis of the virtual camera with the viewing direction thus determined, and generates a display screen determined by the action of the game object.
<Turn Action 1>
“Turn to left” is an action of turning the steering wheel to the left. When the yaw angle is in the range −60°<YAW (yaw angle)<−2°, the condition determination unit 112 determines that the condition to trigger “turn to left” is met.
The image generation unit 156 generates a display screen showing the vehicle turning to the left.
<Turn Action 2>
“Turn to right” is an action of turning the steering wheel to the right. When the yaw angle is in the range 2°<YAW (yaw angle)<60°, the condition determination unit 112 determines that the condition to trigger “turn to right” is met.
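A sketch of how the yaw ranges above might be checked is given below; the open-interval comparisons mirror the ranges stated for the two turn actions, and the function name is an assumption.

```python
def classify_turn(yaw_deg):
    """Return the turn action triggered by the current yaw angle, or
    None when the steering is kept straight (a simplified sketch)."""
    if -60 < yaw_deg < -2:
        return "turn to left"
    if 2 < yaw_deg < 60:
        return "turn to right"
    return None
```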
<Dash Action>
“Dash” is an action in which the vehicle speed is increased dramatically for a moment. The condition that triggers “dash” comprises a plurality of individual conditions that are bound by an operator “&”. The operator “&” signifies that the conditions should be met concurrently. By setting a condition for operation that includes a plurality of individual conditions for a single action, game applications that require user skill can be produced.
The first condition (−35°<PITCH<35°) requires that the pitch angle β of the input device 20 be within the range −35°<PITCH (pitch angle)<35°. The second condition (−40°<YAW<40°) requires that the yaw angle α of the input device 20 be within the range −40°<YAW (yaw angle)<40°. The third condition (HANDLEPOS(Z)<−150 mm:TIME:1.0 s) is decomposed into (HANDLEPOS(Z)<−150 mm) and (TIME:1.0 s). HANDLEPOS(Z)<−150 mm requires that the amount of movement of the handle 24 in the negative direction aligned with the Z-axis be more than 150 mm, and (TIME:1.0 s) requires that the movement be performed within one second. In other words, the third condition requires that the handle 24 be moved a distance longer than 150 mm in the negative direction aligned with the Z-axis within one second. In the game system 1 according to the embodiment, the user generates a user command using the input device 20. Therefore, the time limit imposed on the manipulation of the input device 20 prompts the user to perform a speedy action and enjoy lively game operation.
The condition determination unit 112 determines whether the individual conditions are met on an individual basis. For a “dash” action to take place, the third condition requires that the input device 20 be moved by a predetermined amount within a predetermined time. More specifically, the third condition requires that the movement of the handle 24 in the negative direction aligned with the Z-axis exceed 150 mm within one second. When the direction of movement of the handle 24 changes from the positive direction to the negative direction aligned with the Z-axis, or when the handle 24 begins to be moved in the negative direction aligned with the Z-axis, leaving the stationary state, the condition determination unit 112 detects the start of movement in the negative direction aligned with the Z-axis and starts measuring the time elapsed from the point of start of the movement. The movement of the handle 24 in the Z-axis direction is detected by referring to the positional information of the handle 24. More specifically, the condition determination unit 112 monitors the positional information of the handle 24 acquired as the control information. The condition determination unit 112 identifies the start of movement when the Z-coordinate value, found among the sequentially acquired positional information, changes in the negative direction. When the amount of movement of the handle 24 goes beyond 150 mm within one second, the condition determination unit 112 determines that the third condition is met.
In this embodiment, the third condition may include a requirement that the handle 24 move continuously in the negative direction aligned with the Z-axis within one second. The requirement for continuous movement signifies that the Z-coordinate values found in the sequentially acquired positional information should remain unchanged or change in the negative direction (i.e., not change in the positive direction). Therefore, when the handle 24 is moved in the positive direction aligned with the Z-axis before the amount of movement exceeds 150 mm, the amount of movement is canceled and set to 0. Thus, when the input device 20 is moved in the positive direction before the amount of movement exceeds 150 mm since the start of movement in the negative direction aligned with the Z-axis, the condition determination unit 112 determines that the condition is not met upon detecting the movement in the positive direction.
Thus, when the condition determination unit 112 detects that the input device 20 is continuously moved for a distance longer than 150 mm within a predetermined period of time (in this case, one second) while the user keeps thrusting the handle 24 forward without retracting it toward the user, the condition determination unit 112 determines that the third condition is met. Meanwhile, when the input device 20 is moved in the positive direction aligned with the Z-axis before continuously moving 150 mm in the negative direction aligned with the Z-axis since the start of movement, the condition determination unit 112 determines at that point of time that the third condition is not met. When the input device 20 begins to be moved in the negative direction again, the condition determination unit 112 starts measuring time so as to monitor whether the third condition is met.
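The third condition can be sketched as a small state machine that starts timing when negative-Z movement begins, cancels on any positive-Z movement, and succeeds when more than 150 mm is covered within one second. The class and parameter names are assumptions; the symmetric check for the first “jump” condition described later (positive-Y movement of the light-emitting body 22) follows the same pattern with the sign reversed.

```python
class MoveWithinTimeCondition:
    """Sketch of the third 'dash' condition: continuous movement of more
    than threshold_mm in the negative Z direction within limit_s seconds."""

    def __init__(self, threshold_mm=150.0, limit_s=1.0):
        self.threshold_mm = threshold_mm
        self.limit_s = limit_s
        self.start_time = None   # time at which negative movement began
        self.start_z = None      # Z coordinate at the start of movement
        self.prev_z = None

    def update(self, z_mm, now_s):
        if self.prev_z is not None:
            if z_mm > self.prev_z:
                # Movement in the positive direction cancels the attempt.
                self.start_time = None
                self.start_z = None
            elif z_mm < self.prev_z and self.start_time is None:
                # Start of movement in the negative direction: start timing.
                self.start_time = now_s
                self.start_z = self.prev_z
        self.prev_z = z_mm
        if self.start_time is None:
            return False
        if now_s - self.start_time > self.limit_s:
            self.start_time = None   # too slow; wait for a new attempt
            self.start_z = None
            return False
        return (self.start_z - z_mm) > self.threshold_mm
```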
The condition determination unit 112 determines whether the individual conditions for triggering “dash” are met on an individual basis. The individual conditions are bound by the operator “&”, signifying that the conditions should be met concurrently to initiate the “dash” action. For each of the first through third conditions, the condition determination unit 112 sets a flag to 1 when the condition is met, and sets the flag to 0 when the condition is not met. When the flags of all individual conditions are set to 1, the condition determination unit 112 determines that all individual conditions are met and informs the object control unit 114 that the condition for triggering “dash” is met. The object control unit 114 causes the game object 200 to perform “dash”. The image generation unit 156 generates a display screen showing the vehicle speed increasing instantaneously.
<Jump Action>
A jump action is an action of the vehicle jumping. For example, the vehicle can avoid collision with the obstacle 202 by jumping. The condition that triggers “jump” comprises a plurality of individual conditions that are bound by an operator “$”. The operator “$” signifies that the individual conditions need not be met at the same moment; the timing of meeting the conditions may differ. In other words, once a given individual condition is met, the state of the condition being met is maintained until the other individual conditions are met.
The first condition (SPHEREPOS(Y)>150 mm:TIME:1.0 s) is decomposed into (SPHEREPOS(Y)>150 mm) and (TIME:1.0 s). SPHEREPOS(Y)>150 mm requires that the amount of movement of the light-emitting body 22 in the positive direction aligned with the Y-axis be more than 150 mm, and (TIME:1.0 s) requires that the movement be performed within one second. In other words, the first condition requires that the light-emitting body 22 be moved more than 150 mm in the positive direction aligned with the Y-axis within one second. The second condition (PITCH>65°) requires that the pitch angle of the input device 20 be more than 65°.
The condition determination unit 112 determines whether the individual conditions for triggering “jump” are met on an individual basis. For a “jump” action to take place, the first condition requires that the input device 20 be moved by a predetermined amount within a predetermined time. More specifically, the first condition requires that the movement of the light-emitting body 22 in the positive direction aligned with the Y-axis exceed 150 mm within one second. When the direction of movement of the light-emitting body 22 changes from the negative direction to the positive direction aligned with the Y-axis, or when the light-emitting body 22 begins to be moved in the positive direction aligned with the Y-axis, leaving the stationary state, the condition determination unit 112 detects the start of movement in the positive direction aligned with the Y-axis and starts measuring the time elapsed from the point of start of the movement. The movement of the light-emitting body 22 in the Y-axis direction is detected by referring to the positional information of the light-emitting body 22. More specifically, the condition determination unit 112 monitors the positional information of the light-emitting body 22 acquired as the control information. The condition determination unit 112 identifies the start of movement when the Y-coordinate value, found among the sequentially acquired positional information, changes in the positive direction. When the amount of movement of the light-emitting body 22 goes beyond 150 mm within one second, the condition determination unit 112 determines that the first condition is met.
As mentioned before, the positional information of the light-emitting body 22 is derived by the positional information deriving unit 92. The positional information deriving unit 92 is described above as deriving the positional information from the position and size of the image of the light-emitting body. For example, the positional information deriving unit 92 may derive positional information by deriving the amount of movement by integrating the value detected by the three-axis acceleration sensor 56 twice. The positional information deriving unit 92 may use both the image of the light-emitting body and the detected acceleration value to derive the positional information of the light-emitting body 22 so that the precision of derivation is improved. Without an input of the image or the value (e.g., when the light-emitting body 22 is shielded from view so that the image of the light-emitting body cannot be acquired), the positional information of the light-emitting body 22 may be derived only from the detected value of acceleration.
In this embodiment, the first condition may include a requirement that the light-emitting body 22 move continuously in the positive direction aligned with the Y-axis within one second. The requirement for continuous movement signifies that the Y-coordinate values found in the sequentially acquired positional information should remain unchanged or change in the positive direction (i.e., not change in the negative direction). Therefore, when the light-emitting body 22 is moved in the negative direction aligned with the Y-axis before the amount of movement exceeds 150 mm, the amount of movement is canceled and set to 0. Thus, when the input device 20 is moved in the negative direction before the amount of movement exceeds 150 mm since the start of movement in the positive direction aligned with the Y-axis, the condition determination unit 112 determines that the condition is not met upon detecting the movement in the negative direction.
Thus, when the condition determination unit 112 detects that the light-emitting body 22 is moved continuously for a distance longer than 150 mm within a predetermined period of time (in this case, one second) while the user keeps raising the light-emitting body 22 without lowering it, the condition determination unit 112 determines that the first condition is met. Meanwhile, when the light-emitting body 22 is moved in the negative direction aligned with the Y-axis before continuously moving 150 mm in the positive direction aligned with the Y-axis since the start of movement, the condition determination unit 112 determines at that point of time that the first condition is not met. When the light-emitting body 22 begins to be moved in the positive direction again, the condition determination unit 112 starts measuring time so as to monitor whether the first condition is met.
The condition determination unit 112 determines whether the individual conditions are met on an individual basis. The conditions are bound by the operator “$”. Once a given individual condition is met, the state of the condition being met is maintained until the other individual conditions are met. For each of the first and second conditions, the condition determination unit 112 sets a flag to 1 when the condition is met, and sets the flag to 0 when the condition is not met. When the conditions are bound by the operator “$”, once a given condition is met and the condition determination unit 112 sets its flag to 1 accordingly, the flag value of that condition is maintained until the other condition is met and the flag for the other condition is also set to 1, initiating a “jump” action.
In this process, the color setting unit 118 changes the color emitted by the light-emitting body 22 so as to let the user know that the condition for triggering “jump” is met. The color setting unit 118 generates a command to emit light of a color different from the color currently emitted and causes the wireless communication module 86 to transmit the command. In response to the command to emit light, the wireless communication module 48 supplies the command to the main control unit 52, whereupon the main control unit 52 supplies the command to the light-emission control unit 60. The light-emission control unit 60 controls light-emission from the red LED 64a, green LED 64b, blue LED 64c so as to cause the light-emitting unit 62 to emit light of a designated color.
To initiate a “jump” action, the user needs to manipulate the input device 20 so as to meet the condition for operation. In a related type of button operation, an action can be initiated simply by pressing a button. By contrast, the user manipulating the input device 20 according to the embodiment cannot readily recognize whether the manipulation of the input device 20 meets the condition for operation. It is therefore preferable to let the user recognize that the condition for operation to initiate a jump is met by allowing the color setting unit 118 to change the emitted color. In this process, the color setting unit 118 may set different colors for different actions so that the user can know that the manipulation is correct by seeing the color of the light-emitting body 22.
The condition determination unit 112 determines whether the conditions for operation to initiate the respective actions are met.
When a plurality of individual conditions are bound by the operator “$” as in the case of the condition to trigger “jump”, the individual conditions may be met while another action is being performed. For example, when the light-emitting body 22 is raised higher than 150 mm in the Y-axis direction within one second while a “dash” is being performed, the first condition included in the condition for operation is met. In the case that a plurality of individual conditions are bound by the operator “$”, the state of the first condition being met is maintained until the second condition is met. Therefore, the user only has to fulfill the second condition to cause the game object 200 to jump. Similarly, when the pitch angle β of the input device 20 is caused to be larger than 65° while another action is being performed, the second condition is met so that the user only has to fulfill the first condition in order to cause the game object 200 to jump. The user may fulfill one of the individual conditions irrespective of whether another action is being performed and manipulate the input device 20 to fulfill the remaining individual condition when the user desires to perform a jump action. Thus, by associating a plurality of conditions with each other using the operator “$”, the user can determine the timing that each condition is met at will. Therefore, a novel game play that matches the user skill can be created.
The color setting unit 118 may change the color emitted by the light-emitting body 22 so as to let the user know which of the plurality of individual conditions constituting the condition for triggering “jump” is met. The color setting unit 118 assigns different colors to different individual conditions. For example, the color setting unit 118 may cause the light-emitting body 22 to emit green when the first condition is met and blue when the second condition is met. This lets the user know which condition is already met and which condition should be met in order to initiate a “jump” action.
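As a small sketch of this feedback, a color could be chosen from the flags of the two “jump” conditions. The green/blue assignment is the example given above, while the function name and RGB encoding are assumptions.

```python
def jump_feedback_color(first_flag, second_flag, current_color=(255, 255, 255)):
    """Pick an emission color signalling which 'jump' condition is met."""
    if first_flag and not second_flag:
        return (0, 255, 0)   # green: upward-movement condition met
    if second_flag and not first_flag:
        return (0, 0, 255)   # blue: pitch-angle condition met
    return current_color
```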
The condition determination unit 112 determines whether the first condition is met by referring to the control information (S10). When the first condition is met (Y in S10), the condition determination unit 112 sets the first flag to 1 (S12). When the first condition is not met (N in S10), the first flag is set to 0 (S14). Concurrently, the condition determination unit 112 determines whether the second condition is met by referring to the control information (S16). When the second condition is met (Y in S16), the condition determination unit 112 sets the second flag to 1 (S18). When the second condition is not met (N in S16), the second flag is set to 0 (S20).
The condition determination unit 112 determines whether both the first and second flags are set to 1 (S22). When both flags are set to 1 (Y in S22), the object control unit 114 performs an associated action (S24). Meanwhile, when at least one of the first and second flags is set to 0 (N in S22), steps beginning with S10 are performed again.
As described, according to this flow of condition determination, the action is not performed unless the first and second conditions are met concurrently. Accordingly, actions that require user skill can be implemented.
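The flow above can be sketched as a single determination cycle; the step comments refer to S10 through S24, and the function name is an assumption.

```python
def and_flow_cycle(first_met, second_met, perform_action):
    """One cycle of the '&' determination flow: both flags are recomputed
    every cycle, so the action fires only while both conditions hold."""
    first_flag = 1 if first_met else 0     # S10-S14
    second_flag = 1 if second_met else 0   # S16-S20
    if first_flag == 1 and second_flag == 1:   # S22
        perform_action()                       # S24
        return True
    return False                               # back to S10
```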
The condition determination unit 112 determines whether the first condition is met by referring to the control information (S40). When the first condition is met (Y in S40), the condition determination unit 112 sets the first flag to 1 (S42). When the first condition is not met (N in S40), the first flag remains unchanged. Concurrently, the condition determination unit 112 determines whether the second condition is met by referring to the control information (S44). When the second condition is met (Y in S44), the condition determination unit 112 sets the second flag to 1 (S46). When the second condition is not met (N in S44), the second flag remains unchanged.
The condition determination unit 112 determines whether both the first and second flags are set to 1 (S48). When both flags are set to 1 (Y in S48), the object control unit 114 performs an associated action (S50) and sets the first and second flags to 0 (S52). Meanwhile, when at least one of the first and second flags is set to 0 (N in S48), steps beginning with S40 are performed again.
As described, according to this flow of condition determination, once a condition is met, its flag continues to be set to 1 even if the condition subsequently ceases to be met. By maintaining the state in which a condition is met, the user can fulfill the last condition at a desired point of time and control the game object 200 at will.
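The latching behavior can be sketched as below; flags persist across cycles and are cleared only after the action executes (S52). The class name is an assumption.

```python
class DollarFlow:
    """Sketch of the '$' determination flow: a flag, once set, stays set
    even if its condition later stops holding (S40-S52)."""

    def __init__(self, perform_action):
        self.first_flag = 0
        self.second_flag = 0
        self.perform_action = perform_action

    def cycle(self, first_met, second_met):
        if first_met:
            self.first_flag = 1    # S42; otherwise left unchanged
        if second_met:
            self.second_flag = 1   # S46; otherwise left unchanged
        if self.first_flag == 1 and self.second_flag == 1:   # S48
            self.perform_action()                            # S50
            self.first_flag = 0                              # S52
            self.second_flag = 0
            return True
        return False
```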
Described above is an explanation based on an exemplary embodiment. The embodiment is intended to be illustrative only and it will be obvious to those skilled in the art that various modifications to constituting elements and processes could be developed and that such modifications are also within the scope of the present invention.