Japanese Patent Application No. 2010-16083, filed on Jan. 27, 2010, is hereby incorporated by reference in its entirety.
The present invention relates to an information storage medium, a game system, and a display image generation method.
A game system that implements a fitness game has been known (see JP-A-10-207619, for example). Such a game system displays a movement instruction image for the player on a display section, for example.
However, it may be difficult for the player to observe an object displayed in the display image depending on the position of the player in the real space. For example, when the player is positioned away from the display section in the real space, the player can only observe a small object as compared with the case where the player is positioned near the display section. Since a fitness game requires a certain space for the player to move his body, the player is generally positioned at a distance from the display section. Therefore, the player may have difficulty in reliably observing the instructions displayed in the display image.
According to a first aspect of the invention, there is provided a non-transitory computer-readable information storage medium storing a program that generates a display image to be displayed on a display section, the program causing a computer to function as:
an acquisition section that acquires an input image from an input section that applies light to a body and receives reflected light from the body;
an object control section that controls the size of an object in a virtual space based on a distance between the input section and the body, the distance being determined based on the input image; and
an image generation section that generates a display image including the object.
According to a second aspect of the invention, there is provided a game system that generates a display image to be displayed on a display section, the game system comprising:
an acquisition section that acquires an input image from an input section that applies light to a body and receives reflected light from the body;
an object control section that controls the size of an object in a virtual space based on a distance between the input section and the body, the distance being determined based on the input image; and
an image generation section that generates a display image including the object.
According to a third aspect of the invention, there is provided a display image generation method that is implemented by a game system that generates a display image to be displayed on a display section, the method comprising:
acquiring an input image from an input section that applies light to a body and receives reflected light from the body;
controlling the size of an object in a virtual space based on a distance between the input section and the body, the distance being determined based on the input image; and
generating a display image including the object.
The invention may provide an information storage medium, a game system, and a display image generation method that can generate a display image that can be easily observed by the player.
(1) One embodiment of the invention relates to a non-transitory computer-readable information storage medium storing a program that generates a display image to be displayed on a display section, the program causing a computer to function as:
an acquisition section that acquires an input image from an input section that applies light to a body and receives reflected light from the body;
an object control section that controls the size of an object in a virtual space based on a distance between the input section and the body, the distance being determined based on the input image; and
an image generation section that generates a display image including the object.
Another embodiment of the invention relates to a game system including the above sections.
According to the above information storage medium and game system, it is possible to generate a display image that can be easily observed by a player, since the size of an object in a virtual space is controlled based on a distance between the input section and the body, the distance being determined based on the input image.
(2) In the above information storage medium or game system,
the object control section may increase a scaling factor of the object as the distance increases.
Specifically, since the object is scaled up when the player has moved away from the input section, a display image that can be easily observed by the player can be generated.
(3) In the above information storage medium or game system,
the object control section may reduce a scaling factor of the object as the distance decreases.
Specifically, since the object is scaled down when the player has approached the input section, it is possible to generate a display image including an object that has an appropriate size and can be easily observed by the player even if the player is positioned near the input section.
(4) In the above information storage medium or game system,
the object control section may control a degree by which the scaling factor of the object is changed with the lapse of time based on the distance.
Specifically, since the degree by which the scaling factor of the object is changed with the lapse of time is controlled based on the distance, the size of the object can be changed by a degree that allows the player to easily observe the object.
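As an illustration of the size control described above, the following is a minimal Python sketch that maps the measured distance between the input section and the body to a scaling factor of the object; the function name, the distance bounds, and the scaling bounds are illustrative assumptions and not limitations of the embodiments.

```python
def scaling_factor_from_distance(distance_m: float,
                                 near_m: float = 1.0,
                                 far_m: float = 3.0,
                                 min_scale: float = 1.0,
                                 max_scale: float = 2.0) -> float:
    """Map the measured player-to-sensor distance to an object scaling factor.

    The factor grows linearly from min_scale at near_m to max_scale at far_m
    and is clamped outside that range, so the object is scaled up as the
    player moves away from the input section and scaled down as the player
    approaches it.
    """
    t = (distance_m - near_m) / (far_m - near_m)
    t = max(0.0, min(1.0, t))
    return min_scale + t * (max_scale - min_scale)


# Example: a player standing 2.5 m from the input section sees the object
# drawn at roughly 1.75 times its modeled size.
print(scaling_factor_from_distance(2.5))
```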
(5) In the above information storage medium or game system,
the image generation section may generate a display image including a plurality of objects; and the object control section may control the size of a predetermined object among the plurality of objects based on the distance.
Specifically, since the size of a predetermined object is controlled based on the distance between the input section and the body, the distance being determined based on the input image, it is possible to generate a display image that allows the player to easily observe an object that provides necessary information to the player, for example.
(6) In the above information storage medium or game system,
the program may cause the computer to further function as a determination section that determines an input from the input section; and
the determination section may determine the input based on the distance.
Specifically, since an input is determined based on the distance between the input section and the body, the distance being determined based on the input image, it is possible to perform an input determination process appropriate for the player. For example, an input determination process that satisfies the player can be performed by reducing the difficulty level as the player moves away from the input section.
(7) In the above information storage medium or game system,
the program may cause the computer to further function as a movement processing section that moves the object in the virtual space; and
the movement processing section may control a moving speed of the object based on the distance.
Specifically, since the moving speed of the object is controlled based on the distance between the input section and the body, the distance being determined based on the input image, it is possible to provide a display image including an object that moves at an appropriate moving speed.
For example, the object can be easily observed if the moving speed of the object is reduced as the player moves away from the input section.
(8) In the above information storage medium or game system,
the program may cause the computer to further function as a virtual camera control section that controls a position of a virtual camera in a virtual three-dimensional space;
the virtual camera control section may control the position of the virtual camera based on the distance; and
the image generation section may generate an image viewed from the virtual camera as the display image.
Specifically, since the position of the virtual camera is controlled based on the distance between the input section and the body, the distance being determined based on the input image, it is possible to provide an appropriate image that can be easily observed by the player. For example, when the position of the virtual camera is controlled so that the virtual camera approaches the object as the player moves away from the input section, the object is scaled up by the perspective projection transformation process, which makes it possible to provide an image that can be easily observed by the player.
(9) In the above information storage medium or game system,
the program may cause the computer to further function as a virtual camera control section that controls an angle of view of a virtual camera in a virtual three-dimensional space;
the virtual camera control section may control the angle of view of the virtual camera based on the distance; and
the image generation section may generate an image viewed from the virtual camera as the display image.
Specifically, since the angle of view of the virtual camera is controlled based on the distance between the input section and the body, the distance being determined based on the input image, it is possible to provide an appropriate image that can be easily observed by the player. For example, the angle of view is increased (zoom out) as the player approaches the input section, and reduced (zoom in) as the player moves away from the input section. This makes it possible to generate a display image so that the object is scaled down as the player approaches the input section, and the object is scaled up as the player moves away from the input section. Therefore, an appropriate image that can be easily observed by the player can be generated.
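The following minimal Python sketch illustrates one possible way of controlling the angle of view based on the distance as described above; the distance bounds, the angle bounds, and the linear mapping are illustrative assumptions.

```python
import math

def angle_of_view_deg(distance_m: float,
                      near_m: float = 1.0, far_m: float = 3.0,
                      wide_deg: float = 60.0, narrow_deg: float = 40.0) -> float:
    """Larger distance -> smaller angle of view (zoom in), clamped to the bounds."""
    t = max(0.0, min(1.0, (distance_m - near_m) / (far_m - near_m)))
    return wide_deg + t * (narrow_deg - wide_deg)


def apparent_scale(angle_deg: float, reference_deg: float = 60.0) -> float:
    """Relative on-screen size of an object under perspective projection,
    compared with the wide reference angle of view."""
    return math.tan(math.radians(reference_deg) / 2) / math.tan(math.radians(angle_deg) / 2)


fov = angle_of_view_deg(3.0)
print(fov, apparent_scale(fov))   # ~40 degrees; the object appears ~1.6x larger
```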
(10) In the above information storage medium or game system,
the program may cause the computer to further function as a virtual camera control section that controls a view direction of a virtual camera in a virtual three-dimensional space;
the virtual camera control section may control the view direction of the virtual camera based on a positional relationship between the body and the input section, the positional relationship being determined based on the input image; and
the image generation section may generate an image viewed from the virtual camera as the display image.
Specifically, since the view direction of the virtual camera is controlled based on the positional relationship between the body and the input section, the positional relationship being determined based on the input image, it is possible to generate an appropriate image that can be easily observed by the player. Moreover, since the view direction of the virtual camera can be controlled in the direction in which the player observes the display section, a realistic display image can be provided.
(11) In the above information storage medium or game system,
the program may cause the computer to further function as a disposition section that disposes the object in the virtual space; and
the disposition section may determine the position of the object in the virtual space based on a positional relationship between the body and the input section, the positional relationship being determined based on the input image.
This makes it possible to provide a display image in which the object is disposed in the virtual space at an appropriate position that allows the player to easily observe the object.
(12) In the above information storage medium or game system,
the program may cause the computer to further function as a movement processing section that moves the object in the virtual space; and
the movement processing section may control a moving direction of the object in the virtual space based on a positional relationship between the body and the input section, the positional relationship being determined based on the input image.
Specifically, since the moving direction of the object is controlled based on the positional relationship between the body and the input section, the positional relationship being determined based on the input image, it is possible to generate a display image including an object that moves in an appropriate moving direction that allows the player to easily observe the object.
(13) Another embodiment of the invention relates to a display image generation method that is implemented by a game system that generates a display image to be displayed on a display section, the method including:
acquiring an input image from an input section that applies light to a body and receives reflected light from the body;
controlling the size of an object in a virtual space based on a distance between the input section and the body, the distance being determined based on the input image; and
generating a display image including the object.
Embodiments of the invention are described below. Note that the following embodiments do not unduly limit the scope of the invention as stated in the claims. Note also that all of the elements described below should not necessarily be taken as essential elements of the invention.
The acceleration sensor 210 according to this embodiment is configured as a three-axis acceleration sensor, and detects three-axis acceleration vectors. Specifically, the acceleration sensor 210 detects a change in velocity and direction within a given time as the acceleration vector of the controller along each axis.
As illustrated in
The controller 20 has a function of indicating (pointing) an arbitrary position within a display screen 91.
As illustrated in
A method of calculating the indication position of the controller 20 within the display screen 91 is described below with reference to
Specifically, the origin O of the captured image PA is determined to be the indication position of the controller 20. The indication position is calculated from the relative positional relationship between the origin O of the captured image PA, the positions RP and LP in the captured image PA, and a display screen area DA that is an area in the captured image PA corresponding to the display screen 91.
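The following minimal Python sketch illustrates one possible calculation of this kind; the coordinate conventions, the size of the display screen area DA relative to the light-source spacing, and the screen resolution are illustrative assumptions.

```python
def indication_position(rp, lp, screen_w=1280, screen_h=720,
                        da_width_per_source_gap=4.0):
    """rp, lp: (x, y) light-source positions in the captured image PA, with the
    image origin O at (0, 0). Returns the indicated pixel on the display screen."""
    # Midpoint of the two detected light sources anchors the screen area DA.
    mid_x = (rp[0] + lp[0]) / 2.0
    mid_y = (rp[1] + lp[1]) / 2.0
    gap = abs(rp[0] - lp[0]) or 1.0          # spacing between the light sources
    da_w = gap * da_width_per_source_gap     # assumed width of DA inside PA
    da_h = da_w * screen_h / screen_w
    # Offset of O from the centre of DA, normalised and mapped to display pixels
    # (the controller points where O falls inside DA).
    u = (0.0 - mid_x) / da_w
    v = (0.0 - mid_y) / da_h
    return ((0.5 + u) * screen_w, (0.5 + v) * screen_h)


print(indication_position(rp=(30, 5), lp=(-10, 5)))
```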
In the example illustrated in
The reference position recognition object is not particularly limited insofar as the indication position of the controller within the game screen can be specified. The number of light sources need not necessarily be two. It suffices that the reference position recognition object have a shape that allows the relative positional relationship with the display screen 91 to be specified. The number of reference position recognition objects may be one, or three or more.
The light sources 30R and 30L may each be a light-emitting diode (LED) that emits infrared radiation (i.e., invisible light), for example. The light sources 30R and 30L are disposed at a given position with respect to the display section 90. In this embodiment, the light sources 30R and 30L are disposed at a predetermined interval.
The controller 20 includes the acceleration sensor 210, the imaging section 220, a speaker 230, a vibration section 240, a microcomputer 250, and a communication section 260.
In this embodiment, the controller 20 is used as an example of the input section. An image input section, a sound input section, or a pressure sensor may be used as the input section.
The acceleration sensor 210 detects three-axis (X axis, Y axis, and Z axis) accelerations. Specifically, the acceleration sensor 210 detects accelerations in the vertical direction (Y-axis direction), the transverse direction (X-axis direction), and the forward/backward direction (Z-axis direction). The acceleration sensor 210 detects accelerations every 5 msec. The acceleration sensor 210 may detect one-axis, two-axis, or six-axis accelerations. The accelerations detected by the acceleration sensor are transmitted to the game machine 10 through the communication section 260.
The imaging section 220 includes an infrared filter 222, a lens 224, an imaging element (image sensor) 226, and an image processing circuit 228. The infrared filter 222 is disposed on the front side of the controller, and allows only infrared radiation contained in light incident from the light sources 30R and 30L (disposed at a given position with respect to the display section 90) to pass through. The lens 224 condenses the infrared radiation that has passed through the infrared filter 222, and emits the infrared radiation to the imaging element 226. The imaging element 226 is a solid-state imaging element such as a CMOS sensor or a CCD. The imaging element 226 images the infrared radiation condensed by the lens 224 to generate a captured image. The image processing circuit 228 processes the captured image generated by the imaging element 226. For example, the image processing circuit 228 processes the captured image generated by the imaging element 226 to detect a high-luminance component, and detects light source position information (specified position) within the captured image. The detected position information is transmitted to the game machine 10 through the communication section 260.
The speaker 230 outputs sound acquired from the game machine 10 through the communication section 260. In this embodiment, the speaker 230 outputs confirmation sound and effect sound transmitted from the game machine 10.
The vibration section (vibrator) 240 receives a vibration signal transmitted from the game machine 10, and operates based on the vibration signal.
The microcomputer 250 outputs sound or operates the vibrator based on data received from the game machine 10. The microcomputer 250 causes the communication section 260 to transmit the accelerations detected by the acceleration sensor 210 to the game machine 10, or causes the communication section 260 to transmit the position information detected by the imaging section 220 to the game machine 10.
The communication section 260 includes an antenna and a wireless module, and exchanges data with the game machine 10 via wireless communication using the Bluetooth (registered trademark) technology, for example. The communication section 260 according to this embodiment transmits the accelerations detected by the acceleration sensor 210, the position information detected by the imaging section 220, and the like to the game machine 10 at alternate intervals of 4 msec and 6 msec. The communication section 260 may be connected to the game machine 10 via a communication cable, and may exchange information with the game machine 10 via the communication cable.
The controller 20 may include operating sections such as a lever (analog pad), a mouse, and a touch panel display in addition to the arrow key 271 and the button 272. The controller 20 may include a gyrosensor that detects an angular velocity applied to the controller 20.
The game machine 10 according to this embodiment is described below. The game machine 10 according to this embodiment includes a storage section 170, a processing section 100, an information storage medium 180, and a communication section 196.
The storage section 170 serves as a work area for the processing section 100, the communication section 196, and the like. The function of the storage section 170 may be implemented by hardware such as a RAM (VRAM).
The storage section 170 according to this embodiment includes a main storage section 171, a drawing buffer 172, a determination information storage section 173, and a sound data storage section 174. The drawing buffer 172 stores an image generated by an image generation section 120.
The determination information storage section 173 stores determination information. The determination information includes information for the timing determination section 114A to perform the determination process in synchronization with the music data reproduction time, such as the reference start/end timing, the reference determination period (i.e., determination period) from the reference start timing to the reference end timing, the auxiliary start/end timing, and the auxiliary determination period (i.e., determination period) from the auxiliary start timing to the auxiliary end timing. For example, the determination information storage section 173 stores the reference start/end timing of the reference determination period and the auxiliary start/end timing of the auxiliary determination period in synchronization with the reproduction time when the reproduction start time is “0”.
The determination information stored in the determination information storage section 173 includes defined input information (model input information) corresponding to each determination process performed by the input information determination section 114B. The defined input information may be a set of x, y, and z-axis accelerations (defined acceleration group) corresponding to the determination period of each determination process.
The auxiliary determination period may end at the end timing of the reference determination period corresponding to the auxiliary determination period. When a first reference determination period that starts from a first reference start timing, and a second reference determination period that starts from a second reference start timing that occurs after the first reference start timing are defined so as not to overlap, the auxiliary determination period corresponding to the first reference determination period may end before the second reference start timing.
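For illustration, the determination information for one movement could be held in a record such as the following minimal Python sketch; the field names, the example times, and the tuple layout of the defined acceleration group are illustrative assumptions.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class DeterminationInfo:
    """Determination information for one movement; all times are offsets in
    seconds from the music data reproduction start time ("0")."""
    reference_start: float                     # reference start timing BS
    reference_end: float                       # reference end timing BE
    auxiliary_starts: List[float]              # auxiliary start timings PS1..PSn
    auxiliary_ends: List[float]                # matching auxiliary end timings
    defined_accelerations: List[Tuple[float, float, float]] = field(default_factory=list)


half_turn_left_arm = DeterminationInfo(
    reference_start=12.0, reference_end=13.0,
    auxiliary_starts=[11.8, 11.9, 12.1],
    auxiliary_ends=[12.8, 12.9, 13.0],
    defined_accelerations=[(0.0, 1.2, -0.3), (0.1, 1.0, -0.4)],  # per-frame x, y, z
)
```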
The sound data storage section 174 stores music data, effect sound, and the like.
The processing section 100 performs the various processes according to this embodiment based on a program (data) stored in the information storage medium 180. Specifically, the information storage medium 180 stores a program that causes a computer to function as each section according to this embodiment (i.e., a program that causes a computer to perform the process of each section).
The communication section 196 can communicate with another game machine through a network (Internet). The function of the communication section 196 may be implemented by hardware such as a processor, a communication ASIC, or a network interface card, by a program, or the like. The communication section 196 can perform cable communication and wireless communication.
The communication section 196 includes an antenna and a wireless module, and exchanges data with the communication section 260 of the controller 20 using the Bluetooth (registered trademark) technology, for example. For example, the communication section 196 transmits sound data (e.g., confirmation sound and effect sound) and the vibration signal to the controller 20, and receives the information (e.g., acceleration vector and pointing position) detected by the acceleration sensor and the image sensor of the controller 20 at alternate intervals of 4 msec and 6 msec.
A program that causes a computer to function as each section according to this embodiment may be distributed to the information storage medium 180 (or the storage section 170) from a storage section or an information storage medium included in a server through a network. Use of the information storage medium included in the server is also included within the scope of the invention.
The processing section 100 (processor) performs a game process, an image generation process, and a sound control process based on the information received from the controller 20, a program loaded into the storage section 170 from the information storage medium 180, and the like.
The processing section 100 according to this embodiment performs various game processes. For example, the processing section 100 starts the game when game start conditions have been satisfied, proceeds with the game, finishes the game when game finish conditions have been satisfied, and performs an ending process when the final stage has been cleared. The processing section 100 also reproduces the music data stored in the sound data storage section 174.
The processing section 100 according to this embodiment functions as an acquisition section 110, a disposition section 111, a movement/motion processing section 112, an object control section 113, a determination section 114, an image generation section 120, and a sound control section 130.
The acquisition section 110 acquires input information received from the input section (controller 20). For example, the acquisition section 110 acquires three-axis accelerations detected by the acceleration sensor 210.
The disposition section 111 disposes an object in a virtual space (virtual three-dimensional space (object space) or virtual two-dimensional space). For example, the disposition section 111 disposes a display object (e.g., building, stadium, car, tree, pillar, wall, or map (topography)) in the virtual space in addition to a character and an instruction object. The virtual space is a virtual game space. For example, the virtual three-dimensional space is a space in which an object is disposed at three-dimensional coordinates (X, Y, Z) (e.g., world coordinate system or virtual camera coordinate system).
For example, the disposition section 111 disposes an object (i.e., an object formed by a primitive (e.g., polygon, free-form surface, or subdivision surface)) in the world coordinate system. The disposition section 111 determines the position and the rotation angle (synonymous with orientation or direction) of the object in the world coordinate system, and disposes the object at the determined position (X, Y, Z) and the determined rotation angle (rotation angles around the X, Y, and Z-axes). The disposition section 111 may dispose a scaled object in the virtual space.
The movement/motion processing section 112 calculates the movement/motion of the object in the virtual space. Specifically, the movement/motion processing section 112 causes the object to move in the virtual space or to make a motion (animation) based on the input information received from the input section, a program (movement/motion algorithm), various types of data (motion data), and the like. More specifically, the movement/motion processing section 112 sequentially calculates movement information (e.g., moving speed, acceleration, position, and direction) and motion information (i.e., the position or the rotation angle of each part that forms the object) about the object every frame (1/60th of a second). The term “frame” refers to a time unit used for the object movement/motion process and the image generation process.
When moving the object in the virtual two-dimensional space, the movement/motion processing section 112 may move the object (e.g., instruction mark) in a given moving direction at a predetermined moving speed.
The object control section 113 controls the size of the object. For example, the object control section 113 scales up/down (enlarges or reduces) a modeled object (scaling factor: 1). The object control section 113 changes the scaling factor of the object with the lapse of time.
Specifically, the object control section 113 changes the scaling factor of the object from 1 to 2 during a period from the start timing to the end timing of the reference determination period, and scales up the object based on the scaling factor that has been changed. The object control section 113 may control the degree by which the scaling factor of the object is changed with the lapse of time. For example, the object control section 113 may change the scaling factor of the object from 1 to 2 during a period from the start timing to the end timing of the reference determination period, or may change the scaling factor of the object from 1 to 3 during a period from the start timing to the end timing of the reference determination period.
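The following minimal Python sketch illustrates how the scaling factor could be changed from 1 to a target value over the reference determination period; the linear interpolation and the example times are illustrative assumptions.

```python
def scale_at(t: float, start: float, end: float, target: float = 2.0) -> float:
    """Scaling factor at time t, growing from 1.0 at the start timing to
    `target` at the end timing of the reference determination period."""
    if t <= start:
        return 1.0
    if t >= end:
        return target
    return 1.0 + (target - 1.0) * (t - start) / (end - start)


# The degree by which the scaling factor is changed with the lapse of time is
# controlled by choosing a different target (e.g., 2.0 or 3.0) for the period.
for t in (12.0, 12.5, 13.0):
    print(scale_at(t, start=12.0, end=13.0, target=2.0))   # 1.0, 1.5, 2.0
```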
The determination section 114 includes a timing determination section 114A and an input information determination section 114B.
The timing determination section 114A determines whether or not an input start timing coincides with a reference start timing (a model start timing). The timing determination section 114A also determines whether or not the input start timing coincides with an auxiliary start timing that is defined based on the reference start timing and differs from the reference start timing.
When a plurality of auxiliary start timings that differ from each other are defined corresponding to the reference start timing, the timing determination section 114A determines whether or not the input start timing coincides with each of the plurality of auxiliary start timings. Specifically, the timing determination section 114A determines whether or not the input start timing coincides with the auxiliary start timing at each of the plurality of auxiliary start timings. The timing determination section 114A may determine whether or not the input start timing coincides with the auxiliary start timing at one or more of the plurality of auxiliary start timings. The timing determination section 114A may determine whether or not the input start timing coincides with the auxiliary start timing at one of the plurality of auxiliary start timings.
When the input start timing coincides with the reference start timing, the timing determination section 114A determines whether or not the input end timing coincides with the end timing of the reference determination period. When the input start timing coincides with the auxiliary start timing, the timing determination section 114A determines whether or not the input end timing coincides with the end timing of an auxiliary determination period.
The input information determination section 114B determines whether or not input information that has been input during a given reference determination period that starts from the reference start timing coincides with defined input information. The input information determination section 114B may determine whether or not the input information that has been input during a given reference determination period (a given model determination period) that starts from the reference start timing coincides with the defined input information when the input start timing coincides with the reference start timing.
The input information determination section 114B also determines whether or not input information that has been input during a given auxiliary determination period that starts from the auxiliary start timing coincides with the defined input information. The input information determination section 114B may determine whether or not the input information that has been input during a given auxiliary determination period that starts from the auxiliary start timing coincides with the defined input information when the input start timing coincides with the auxiliary start timing.
When a plurality of auxiliary start timings that differ from each other are defined corresponding to the reference start timing, the input information determination section 114B determines whether or not the input information that has been input during a given auxiliary determination period that starts from the auxiliary start timing that has been determined by the timing determination section 114A to coincide with the input start timing, coincides with the defined input information.
When a plurality of pieces of defined input information that differ from each other are defined corresponding to the reference determination period, the input information determination section 114B determines whether or not the input information that has been input during the reference determination period that starts from the reference start timing coincides with at least one of the plurality of pieces of defined input information when the timing determination section 114A has determined that the input start timing coincides with the reference start timing.
When a plurality of pieces of defined input information that differ from each other are defined corresponding to the auxiliary determination period, the input information determination section 114B determines whether or not the input information that has been input during the auxiliary determination period that starts from the auxiliary start timing coincides with at least one of the plurality of pieces of defined input information when the timing determination section 114A has determined that the input start timing coincides with the auxiliary start timing.
When the defined input information includes a defined acceleration group (a model acceleration group) including a plurality of accelerations, the input information determination section 114B performs the following process. Specifically, the input information determination section 114B determines whether or not an acceleration group including a plurality of accelerations detected from the input section during the reference determination period coincides with the defined acceleration group when the input start timing coincides with the reference start timing, and determines whether or not an acceleration group including a plurality of accelerations detected from the input section during the auxiliary determination period coincides with the defined acceleration group when the input start timing coincides with the auxiliary start timing.
When a defined moving path (a model moving path) is used as the defined input information, the input information determination section 114B performs the following process. Specifically, the input information determination section 114B determines whether or not a moving path detected from the input section during the reference determination period coincides with the defined moving path when the input start timing coincides with the reference start timing, and determines whether or not a moving path detected from the input section during the auxiliary determination period coincides with the defined moving path when the input start timing coincides with the auxiliary start timing.
When the defined input information includes a defined moving vector that defines the moving amount and the moving direction of a feature point between images, the input information determination section 114B performs the following process. Specifically, the input information determination section 114B determines whether or not a moving vector between a plurality of input images acquired from the input section during the reference determination period coincides with the defined moving vector when the input start timing coincides with the reference start timing, and determines whether or not a moving vector between a plurality of input images acquired from the input section during the auxiliary determination period coincides with the defined moving vector when the input start timing coincides with the auxiliary start timing.
The image generation section 120 performs a drawing process based on the results of various processes performed by the processing section 100 to generate an image, and outputs the generated image to the display section 90. For example, the image generation section 120 according to this embodiment generates an image that instructs the reference start timing and the reference determination period.
The image generation section 120 receives object data (model data) including vertex data (e.g., vertex position coordinates, texture coordinates, color data, normal vector, or alpha-value) about each vertex of the object (model), and performs a vertex process (shading using a vertex shader) based on the vertex data included in the received object data. When performing the vertex process, the image generation section 120 may optionally perform a vertex generation process (tessellation, curved surface division, or polygon division) for subdividing the polygon.
In the vertex process, the image generation section 120 performs a vertex movement process and a geometric process such as coordinate transformation (e.g., world coordinate transformation or viewing transformation (camera coordinate transformation)), clipping, perspective transformation (projection transformation), and viewport transformation based on a vertex processing program (vertex shader program or first shader program), and changes (updates or adjusts) the vertex data about each vertex that forms the object based on the processing results.
The image generation section 120 then performs a rasterization process (scan conversion) based on the vertex data changed by the vertex process so that the surface of the polygon (primitive) is linked to pixels. The image generation section 120 then performs a pixel process (shading using a pixel shader or a fragment process) that draws the pixels that form the image (fragments that form the display screen). In the pixel process, the image generation section 120 determines the drawing color of each pixel that forms the image by performing various processes such as a texture reading (texture mapping) process, a color data setting/change process, a translucent blending process, and an anti-aliasing process based on a pixel processing program (pixel shader program or second shader program), and outputs (draws) the drawing color of the object subjected to perspective transformation to the image buffer 172 (i.e., a buffer that can store image information in pixel units; VRAM or rendering target). Specifically, the pixel process includes a per-pixel process that sets or changes the image information (e.g., color, normal, luminance, and alpha-value) in pixel units. The image generation section 120 thus generates an image viewed from the virtual camera (given viewpoint) in the object space. When a plurality of virtual cameras (viewpoints) are provided, the image generation section 120 may generate an image so that images (divided images) viewed from the respective virtual cameras are displayed on one screen.
The vertex process and the pixel process are implemented by hardware that enables a programmable polygon (primitive) drawing process (i.e., a programmable shader (vertex shader and pixel shader)) based on a shader program written in shading language. The programmable shader enables a programmable per-vertex process and a per-pixel process so that the degree of freedom of the drawing process increases, and the representation capability can be significantly improved as compared with a fixed drawing process using hardware.
The image generation section 120 performs a geometric process, texture mapping, hidden surface removal, alpha-blending, and the like when drawing the object.
In the geometric process, the image generation section 120 subjects the object to coordinate transformation, clipping, perspective projection transformation, light source calculation, and the like. The object data (e.g. object's vertex position coordinates, texture coordinates, color data (luminance data), normal vector, or alpha-value) after the geometric process (after perspective transformation) is stored in the storage section 170.
The term “texture mapping” refers to a process that maps a texture (texel value) stored in the storage section 170 onto the object. Specifically, the image generation section 120 reads a texture (surface properties such as color (RGB) and alpha-value) from the storage section 170 using the texture coordinates set (assigned) to the vertices of the object, and the like. The image generation section 120 maps the texture (two-dimensional image) onto the object. In this case, the image generation section 120 performs a pixel-texel link process, a bilinear interpolation process (texel interpolation process), and the like.
The image generation section 120 may perform a hidden surface removal process by a Z-buffer method (depth comparison method or Z-test) using a Z-buffer (depth buffer) that stores the Z-value (depth information) of the drawing pixel. Specifically, the image generation section 120 refers to the Z-value stored in the Z-buffer when drawing the drawing pixel corresponding to the primitive of the object. The image generation section 120 compares the Z-value stored in the Z-buffer with the Z-value of the drawing pixel of the primitive. When the Z-value of the drawing pixel is a Z-value in front of the virtual camera (e.g., a small Z-value), the image generation section 120 draws the drawing pixel, and updates the Z-value stored in the Z-buffer with a new Z-value.
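The following minimal Python sketch illustrates the Z-test described above, assuming that a smaller Z-value is closer to the virtual camera and that the buffers are simple per-pixel lists.

```python
def draw_pixel(x, y, z, color, width, z_buffer, color_buffer):
    i = y * width + x
    if z < z_buffer[i]:          # drawing pixel is in front of the stored value
        z_buffer[i] = z          # update the Z-buffer with the new Z-value
        color_buffer[i] = color  # draw the drawing pixel


W, H = 4, 4
zbuf = [float("inf")] * (W * H)
cbuf = [(0, 0, 0)] * (W * H)
draw_pixel(1, 1, 0.6, (255, 0, 0), W, zbuf, cbuf)
draw_pixel(1, 1, 0.3, (0, 255, 0), W, zbuf, cbuf)   # closer: overwrites red
draw_pixel(1, 1, 0.9, (0, 0, 255), W, zbuf, cbuf)   # farther: rejected
print(cbuf[1 * W + 1])   # (0, 255, 0)
```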
The term “alpha-blending” refers to a translucent blending process (e.g., normal alpha-blending, additive alpha-blending, or subtractive alpha-blending) based on the alpha-value (A value).
For example, the image generation section 120 performs a linear synthesis process on a drawing color (color to be overwritten) C1 that is to be drawn in the image buffer 172 and a drawing color (basic color) C2 that has been drawn in the image buffer 172 (rendering target) based on the alpha-value. Specifically, the final drawing color C can be calculated by “C=C1*alpha+C2*(1-alpha)”.
Note that the alpha-value is information that can be stored corresponding to each pixel (texel or dot), such as additional information other than the color information. The alpha-value may be used as mask information, translucency (equivalent to transparency or opacity), bump information, or the like.
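The following minimal Python sketch evaluates the normal alpha-blending formula given above; the example colors are illustrative.

```python
def alpha_blend(c1, c2, alpha):
    """C = C1*alpha + C2*(1 - alpha), applied per color channel, where C1 is
    the color to be overwritten and C2 is the color already drawn."""
    return tuple(a * alpha + b * (1.0 - alpha) for a, b in zip(c1, c2))


# Drawing a half-transparent red object over a white background:
print(alpha_blend((255, 0, 0), (255, 255, 255), alpha=0.5))   # (255.0, 127.5, 127.5)
```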
The sound control section 130 performs a sound process based on the results of various processes performed by the processing section 100 to generate game sound (e.g., background music (BGM), effect sound, or voice), and outputs the generated game sound to the speaker 92.
The terminal according to this embodiment may be controlled so that only one player can play the game (single-player mode), or a plurality of players can play the game (multi-player mode). In the multi-player mode, the terminal may exchange data with another terminal through a network, and perform the game process, or a single terminal may perform the process based on the input information received from a plurality of input sections, for example.
The information storage medium 180 (computer-readable medium) stores a program, data, and the like. The function of the information storage medium 180 may be implemented by hardware such as an optical disk (CD or DVD), a magneto-optical disk (MO), a magnetic disk, a hard disk, a magnetic tape, or a memory (ROM).
The display section 90 outputs an image generated by the processing section 100. The function of the display section 90 may be implemented by hardware such as a CRT display, a liquid crystal display (LCD), an organic EL display (OELD), a plasma display panel (PDP), a touch panel display, or a head mount display (HMD).
The speaker 92 outputs sound reproduced by the sound control section 130. The function of the speaker 92 may be implemented by hardware such as a speaker or a headphone. The speaker 92 may be a speaker provided in the display section. For example, when a television set (home television set) is used as the display section, the speaker 92 may be a speaker provided in the television set.
In the first embodiment, an image including an instruction object OB1 that instructs a Karate movement is displayed on the display section 90, as illustrated in
The player performs fitness exercise as if to perform a Karate technique by moving the controllers 20A and 20B held with either hand in the real space while watching the instruction image displayed on the display section 90.
In this embodiment, the input determination process is performed on each Karate movement (e.g., half turn of the left arm) (unit), and a plurality of Karate movements are defined in advance. A reference determination period is set for each movement (e.g., half turn of the left arm), and whether or not the input start timing coincides with the start timing (reference start timing) of the reference determination period, whether or not a movement specified by the input information that has been input during the reference determination period coincides with a given movement (e.g., half turn of the left arm), and whether or not the input end timing coincides with the end timing (reference end timing) of the reference determination period are determined.
As illustrated in
In this embodiment, the input determination process is sequentially performed on the Karate movement with the lapse of time. As illustrated in
The character C is disposed in the virtual three-dimensional space, and an image viewed from the virtual camera is generated. The two-dimensional instruction object OB1, advance instruction object OB2, and moving timing marks A1 and A2 are synthesized with the generated image to generate a display image.
The game machine 10 according to this embodiment acquires accelerations detected by the acceleration sensor 210 of the controller 20 as the input information, and performs the input determination process (input evaluation process) based on the input information. Specifically, the game machine 10 determines whether or not the player has performed the Karate movement instructed by the image. The details of the input determination process according to this embodiment are described below.
In the example illustrated in
In this embodiment, when the reference start timing has been reached, whether or not the input start timing coincides with the reference start timing is determined based on the acceleration vector acquired from the controller 20.
As illustrated in
The x, y, and z-axis accelerations acquired at a reference start timing BS are compared with the accelerations (acceleration range) corresponding to the reference start timing BS to determine whether or not the input start timing coincides with the reference start timing.
For example, when the x, y, and z-axis accelerations acquired at the reference start timing BS coincide with the accelerations corresponding to the reference start timing BS, it is determined that the input start timing coincides with the reference start timing. When the x, y, and z-axis accelerations acquired at the reference start timing BS differ from the accelerations corresponding to the reference start timing BS, it is determined that the input start timing does not coincide with the reference start timing.
Likewise, whether or not an input end timing IE coincides with a reference end timing BE is also determined.
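The following minimal Python sketch illustrates one possible form of this comparison, in which the sampled x, y, and z-axis accelerations are regarded as coinciding with the accelerations corresponding to the reference start timing BS when every axis falls within a tolerance range; the tolerance value is an illustrative assumption.

```python
def coincides(sampled_xyz, defined_xyz, tolerance=0.3):
    """True if every axis of the acceleration sampled at the timing falls
    within the acceleration range defined for that timing."""
    return all(abs(s - d) <= tolerance for s, d in zip(sampled_xyz, defined_xyz))


# The input start timing coincides with BS if the acceleration sampled at BS matches:
print(coincides((0.1, 1.1, -0.2), (0.0, 1.2, -0.3)))   # True
print(coincides((0.9, 0.0, 0.0), (0.0, 1.2, -0.3)))    # False
```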
As illustrated in
In this embodiment, an acceleration group including x, y, and z-axis accelerations detected by the acceleration sensor in a predetermined cycle (every frame) during the reference determination period BP is compared with the acceleration group included in the defined input information to determine whether or not the input information that has been input during the reference determination period BP coincides with the defined input information.
For example, when it has been determined that 60% or more of the accelerations detected by the acceleration sensor during the reference determination period BP coincide with the accelerations included in the defined input information MD, it may be determined that the input information that has been input during the reference determination period BP coincides with the defined input information.
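The following minimal Python sketch illustrates such a comparison, in which the acceleration group sampled every frame during the reference determination period BP is compared frame by frame with the defined acceleration group, and coincidence is determined when at least 60% of the frames match; the per-axis tolerance is an illustrative assumption.

```python
def frame_coincides(sampled, defined, tolerance=0.3):
    return all(abs(s - d) <= tolerance for s, d in zip(sampled, defined))


def input_matches_defined(sampled_group, defined_group, ratio=0.6, tolerance=0.3):
    """True if at least `ratio` of the per-frame accelerations sampled during
    the determination period coincide with the defined acceleration group."""
    n = min(len(sampled_group), len(defined_group))
    if n == 0:
        return False
    hits = sum(frame_coincides(sampled_group[i], defined_group[i], tolerance)
               for i in range(n))
    return hits / n >= ratio


sampled = [(0.0, 1.0, 0.0), (0.2, 0.9, -0.1), (2.0, 0.0, 0.0)]
defined = [(0.1, 1.1, 0.0), (0.1, 1.0, 0.0), (0.1, 1.0, 0.0)]
print(input_matches_defined(sampled, defined))   # True: 2 of 3 frames coincide
```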
As illustrated in
In this case, the player may find it difficult to adjust the input timing to the reference start timing, and may be frustrated. In order to solve this problem, a plurality of auxiliary start timings PS1, PS2, and PS3 corresponding to the reference start timing BS are provided, as illustrated in
Specifically, the reference determination period BP and a plurality of auxiliary determination periods PP1, PP2, and PP3 are defined for a single movement (e.g., half turn of the left arm). Whether or not the input start timing IS coincides with the reference start timing BS of the reference determination period BP, the auxiliary start timing PS1 of the auxiliary determination period PP1, the auxiliary start timing PS2 of the auxiliary determination period PP2, or the auxiliary start timing PS3 of the auxiliary determination period PP3 is determined. When the input start timing IS coincides with one of the timings (BS, PS1, PS2, PS3), whether or not the input information ID that has been input during a period from the input start timing IS to the input end timing IE coincides with the defined input information MD is determined.
As illustrated in
This makes it possible to flexibly determine the input start timing even if the player has delayed moving the controller 20, or has prematurely moved the controller 20.
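The following minimal Python sketch illustrates one possible way of selecting the determination period from the reference start timing BS and the auxiliary start timings PS1 to PS3; the matching window and the example timings are illustrative assumptions.

```python
def select_determination_period(input_start, periods, window=0.05):
    """periods: list of (start_timing, end_timing); the first entry is the
    reference determination period BP, the rest are auxiliary periods PP1..PPn.
    Returns the period whose start timing coincides with the input start
    timing IS, or None when IS coincides with no start timing."""
    for start, end in periods:
        if abs(input_start - start) <= window:
            return (start, end)
    return None


periods = [(12.0, 13.0),                                # BS, BE
           (11.8, 12.8), (11.9, 12.9), (12.1, 13.0)]    # PS1..PS3, PE1..PE3
print(select_determination_period(11.92, periods))      # (11.9, 12.9): slightly early input
print(select_determination_period(12.5, periods))       # None: too late
```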
In this embodiment, since the input determination process is performed on a plurality of movements, it is necessary to prevent a situation in which the auxiliary determination period affects another input determination process. Therefore, as illustrated in
For example, it suffices that the auxiliary end timings PE1a, PE2a, and PE3a occur before a start timing BSb of a reference determination period BPb of the second input determination process and auxiliary start timings PS1b, PS2b, and PS3b corresponding to the reference determination period BPb.
Specifically, it suffices that the auxiliary start timings PS1b, PS2b, and PS3b occur after the end timing BEa of the reference determination period BPa of the first input determination process and the auxiliary end timings PE1a, PE2a, and PE3a corresponding to the reference determination period BPa.
This prevents a situation in which one input determination process affects another input determination process.
Note that the reference start/end timing, the auxiliary start/end timing, the reference determination period, and the auxiliary determination period are defined by the elapsed time from the music data reproduction start time. For example, the reference start/end timing, the auxiliary start/end timing, the reference determination period, and the auxiliary determination period are defined by the elapsed time provided that the music data reproduction start time is “0”.
Note that the differential period between the reference start timing and the input start timing may be measured in advance, and the auxiliary start timing and the auxiliary determination period may be set based on the differential period. For example, a differential period ZP between the reference start timing BS and the input start timing IS is acquired, as illustrated in
Note that an auxiliary determination period corresponding to each of a plurality of reference determination periods may be set based on the differential period ZP.
As illustrated in
As illustrated in
For example, when performing the input determination process having an ID of 1, whether or not the input start timing coincides with the reference start timing BSa, the auxiliary start timing PS1a, the auxiliary start timing PS2a, or the auxiliary start timing PS3a is determined. When the input start timing coincides with the reference start timing BSa, the auxiliary start timing PS1a, the auxiliary start timing PS2a, or the auxiliary start timing PS3a, whether or not the input information that has been input during the determination period that starts from that input start timing coincides with defined input information MD1a, MD2a, or MD3a is determined.
This increases the probability that the start timings are determined to coincide, and the input information is determined to coincide with the defined input information, so that an input determination process that satisfies the player can be implemented.
As illustrated in
As illustrated in
Therefore, the player can determine the reference start timing BS and the reference end timing BE of the reference determination period BP. Moreover, the player can determine the moving path and the moving direction of the controller 20 corresponding to the defined input information MD1 during the reference determination period BP.
Note that the character C may also be moved based on the defined input information MD (i.e., the moving path of the instruction object OB1).
In this embodiment, the object is scaled up/down with the lapse of time so that the instructions indicated by the object can be easily observed.
As illustrated in
As illustrated in
The advance instruction object OB1a is scaled up based on the scaling factor that changes with the lapse of time. Therefore, since the timing when the size of the advance instruction object OB1a becomes a maximum corresponds to the input start timing, the player can instantaneously determine the input start timing. Note that an advance moving timing mark A1a may also be scaled up/down based on the scaling factor of the advance instruction object OB1a.
As illustrated in
In this embodiment, the instruction object is a two-dimensional object, but may be a three-dimensional object.
For example, when generating an image in which an instruction object that instructs a movement (e.g., forward movement) in the depth direction with respect to the virtual camera is disposed in the virtual three-dimensional space, the instruction object having a scaling factor of 1 is disposed at the reference start timing BS of the reference determination period BP. The scaling factor is increased with the lapse of time during the reference determination period BP, and the instruction object is scaled up based on the scaling factor that has been increased. The instruction object is scaled up at a scaling factor of 1.5 at the end timing BE of the reference determination period BP. Therefore, since the instructions in the depth direction can be more effectively displayed (represented) when instructing the movement in the view direction (depth direction) of the virtual camera, the player can easily determine the movement in the depth direction.
The flow of the input determination process according to this embodiment that is performed on a single movement is described below with reference to
When the input start timing coincides with the reference start timing (Y in step S1), points are added to the score of the player (step S2).
Whether or not the input information coincides with the defined input information is then determined (step S3). For example, it is determined that the input information coincides with the defined input information when the input information coincides with one of the plurality of pieces of defined input information MD1, MD2, and MD3.
Note that the input information that has been input during the determination period that starts from the timing determined to coincide with the reference start timing BS, the auxiliary start timing PS1, the auxiliary start timing PS2, or the auxiliary start timing PS3 is compared with the plurality of pieces of defined input information MD1, MD2, and MD3. Taking
When it has been determined that the input information coincides with the defined input information (Y in step S3), points are added to the score of the player (step S4).
Whether or not the input end timing coincides with the end timing of the determination period is then determined (step S5). Taking
When it has been determined that the input end timing coincides with the end timing of the determination period (Y in step S5), points are added to the score of the player (step S6).
(1) In this embodiment, the input determination process may be performed based on a signal input from the controller 20 when the arrow key 271 or the button 272 has been operated. For example, when detection of a predetermined combination of signals (e.g., signals generated when the arrow key has been operated upward, downward, rightward, and rightward) during the reference determination period has been defined as the defined input information, whether or not the first signal (up) has been input at the reference start timing, whether or not a signal corresponding to the defined input information has been input during the reference determination period before the reference end timing is reached, and whether or not the last signal (right) has been input at the reference end timing may be determined.
(2) This embodiment may be applied to a touch panel display that includes a touch panel for detecting the contact position of the player, a pointing device, or the like used as the input section. Specifically, a defined moving path that should be input during the determination period (reference determination period or auxiliary determination period) may be used as the defined input information.
A two-dimensional moving path detected by a touch panel display, a pointing device, or the like may be used as the input information, and whether or not the moving path detected from the input section during the reference determination period coincides with the defined moving path may be determined when the input start timing coincides with the reference start timing. Alternatively, whether or not the moving path detected from the input section during the reference determination period coincides with the defined moving path may be determined when the input start timing coincides with the auxiliary start timing.
A second embodiment of the invention is described below. The second embodiment is configured by applying the first embodiment. The following description focuses on the differences from the first embodiment, additional features of the second embodiment, and the like, and description of the same features as those of the first embodiment is omitted.
The second game system includes the input section 60 (i.e., sensor) that recognizes the movement of the hand or the body of a player P. The input section 60 includes a light-emitting section 610, a depth sensor 620, an RGB camera 630, and a sound input section 640 (multiarray microphone). The input section 60 determines (acquires) the three-dimensional position and shape information of the hand or the body of the player P in the real space without coming into contact with the player P (body). An example of a process performed by the second game system using the input section 60 is described below.
The second game system includes the game machine 50, the input section 60, the display section 90, and a speaker 92.
The input section 60 includes the light-emitting section 610, the depth sensor 620, the RGB camera 630, the sound input section 640, a processing section 650, and a storage section 660.
The light-emitting section 610 applies (emits) light to a body (player or object). For example, the light-emitting section 610 includes a light-emitting element (e.g., LED), and applies light such as infrared radiation to the target body.
The depth sensor 620 includes a light-receiving section that receives reflected light from the body. The depth sensor 620 extracts reflected light from the body irradiated by the light-emitting section 610 by calculating the difference between the quantity of light received when the light-emitting section 610 emits light and the quantity of light received when the light-emitting section 610 does not emit light. Specifically, the depth sensor 620 outputs a reflected light image (i.e., input image) obtained by extracting reflected light from the body irradiated by the light-emitting section 610 to the storage section 660 every predetermined unit time (e.g., 1/60th of a second). The distance (depth value) between the input section 60 and the body can be acquired from the reflected light image in pixel units.
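A minimal sketch of the lit/unlit difference computation and a per-pixel luminance-to-distance conversion is shown below, assuming an inverse-square fall-off of the received light; the conversion constant, the use of NumPy arrays, and the resulting distance units are assumptions for illustration only.

```python
import numpy as np

# Sketch: build a reflected light image from frames captured with the
# light-emitting section on and off, then convert pixel values to a rough
# per-pixel distance. The inverse-square conversion constant K is an
# assumption, so the distance is in arbitrary units.

def reflected_light_image(frame_lit, frame_unlit):
    """Both frames are uint8 arrays of the quantity of received light."""
    lit = frame_lit.astype(np.int16)
    unlit = frame_unlit.astype(np.int16)
    return np.clip(lit - unlit, 0, 255).astype(np.uint8)

def depth_from_luminance(reflected, k=4.0e5, eps=1.0):
    """Approximate per-pixel distance, assuming received light falls off with
    the square of the distance (depth ~ sqrt(K / luminance))."""
    lum = reflected.astype(np.float32) + eps
    return np.sqrt(k / lum)

rng = np.random.default_rng(0)
lit = rng.integers(0, 256, size=(240, 320), dtype=np.uint8)
unlit = rng.integers(0, 64, size=(240, 320), dtype=np.uint8)
refl = reflected_light_image(lit, unlit)
depth = depth_from_luminance(refl)
print(refl.shape, float(depth.min()), float(depth.max()))
```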
The RGB camera 630 focuses light emitted from the body (player P) on a light-receiving plane of an imaging element using an optical system (e.g., lens), photoelectrically converts the light and shade of the image into the quantity of electric charge, and sequentially reads and converts the electric charge into an electrical signal. The RGB camera 630 then outputs an RGB (color) image (i.e., input image) to the storage section 660. For example, the RGB camera 630 generates an RGB image illustrated in
The depth sensor 620 and the RGB camera 630 may receive light through a common light-receiving section. Alternatively, two light-receiving sections may be provided so that the light-receiving section for the depth sensor 620 differs from the light-receiving section for the RGB camera 630.
The sound input section 640 performs a voice recognition process, and may be a multiarray microphone, for example.
The processing section 650 instructs the light-emitting section 610 about the light emission timing, and transmits the reflected light image output from the depth sensor 620 and the RGB image acquired by the RGB camera 630 to the game machine 50.
The storage section 660 sequentially stores the reflected light image output from the depth sensor 620 and the RGB image output from the RGB camera 630.
The game machine 50 according to this embodiment is described below. The game machine 50 according to this embodiment includes a storage section 570, a processing section 500, an information storage medium 580, and a communication section 596.
The defined input information stored in a determination information storage section 573 of the second game system includes a moving vector (motion vector) defined in advance that is used to determine the moving vector (motion vector) of a feature point of the input image (reflected light image and RGB image) during the determination period.
The processing section 500 performs various processes according to this embodiment based on data read from a program stored in the information storage medium 580. Specifically, the information storage medium 580 stores a program that causes a computer to function as each section according to this embodiment (i.e., a program that causes a computer to perform the process of each section).
The communication section 596 can communicate with another game machine through a network (Internet). The function of the communication section 596 may be implemented by hardware such as a processor, a communication ASIC, or a network interface card, a program, or the like.
A program that causes a computer to function as each section according to this embodiment may be distributed to the information storage medium 580 (or the storage section 570) from a storage section or an information storage medium included in a server through a network. Use of the information storage medium included in the server is also included within the scope of the invention.
The processing section 500 (processor) performs a game process, an image generation process, and a sound control process based on the information received from the input section 60, a program loaded into the storage section 570 from the information storage medium 580, and the like.
The processing section 500 of the second game system functions as an acquisition section 510, a disposition section 511, a movement/motion processing section 512, an object control section 513, a determination section 514, a virtual camera control section 515, an image generation section 520, and a sound control section 530.
The acquisition section 510 according to the second embodiment acquires input image information (e.g., reflected light image and RGB image) from the input section 60.
The disposition section 511 determines the position of the object in the virtual space based on the positional relationship between the body and the input section 60, the positional relationship being determined based on the input image (at least one of the reflected light image and the RGB image).
A movement processing section of the movement/motion processing section 512 may control the moving speed of the object based on the distance between the input section 60 and the body, the distance being determined based on the input image.
The object control section 513 controls the size of the object in the virtual space based on the distance between the input section 60 and the body, the distance being determined based on the input image. For example, the object control section 513 reduces the scaling factor of the object as the distance between the input section 60 and the body decreases, and increases the scaling factor of the object as the distance between the input section 60 and the body increases.
The object control section 513 may control the degree by which the scaling factor of the object is changed with the lapse of time based on the distance between the input section 60 and the body, the distance being determined based on the input image.
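The following sketch illustrates one possible mapping from the distance L to a scaling factor and to the per-frame degree of change, assuming a linear mapping with a clamp; the reference distance, the clamp range, and the base growth rate are hypothetical.

```python
# Sketch: map the distance L between the input section and the body to an
# object scaling factor, and scale a per-frame growth rate by the same
# distance. The reference distance, clamp range, and linear mapping are
# assumptions, not values taken from the embodiment.

REFERENCE_DISTANCE = 2.0   # hypothetical distance (metres) at which scale = 1.0
MIN_SCALE, MAX_SCALE = 0.5, 3.0

def scaling_factor(distance_l):
    scale = distance_l / REFERENCE_DISTANCE
    return max(MIN_SCALE, min(MAX_SCALE, scale))

def growth_per_frame(distance_l, base_growth=0.01):
    """Degree by which the scaling factor changes with the lapse of time
    (e.g., for the instruction object during the advance period)."""
    return base_growth * scaling_factor(distance_l)

for L in (1.0, 2.0, 4.0):
    print(L, round(scaling_factor(L), 2), round(growth_per_frame(L), 4))
```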
The determination section 514 includes a timing determination section 514A and an input information determination section 514B. The timing determination section 514A determines whether or not the moving vector that indicates the moving amount and the moving direction of a feature point (given area) specified based on the input image coincides with the moving vector corresponding to the start timing of the determination period (reference determination period or auxiliary determination period A) defined in advance.
The input information determination section 514B determines whether or not the moving vector (moving vector group) that has been acquired during the determination period and indicates the moving amount and the moving direction of a feature point (given area) specified based on the input image coincides with the moving vector (defined moving vector group) corresponding to the determination period (reference determination period or auxiliary determination period A) defined in advance.
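A minimal sketch of the kind of comparison performed by the timing determination section 514A and the input information determination section 514B is shown below; the angular and magnitude tolerances are assumptions, and other similarity measures could be used instead.

```python
import math

# Sketch: compare a moving vector (or a group of moving vectors acquired during
# the determination period) with the defined moving vector(s). The angular and
# magnitude tolerances are assumptions.

def vectors_coincide(v, defined, angle_tol_deg=30.0, mag_tol=0.5):
    """True when v roughly matches the defined moving vector in both
    direction and moving amount."""
    (vx, vy), (dx, dy) = v, defined
    mag_v, mag_d = math.hypot(vx, vy), math.hypot(dx, dy)
    if mag_d == 0:
        return mag_v == 0
    cos = (vx * dx + vy * dy) / (mag_v * mag_d) if mag_v > 0 else -1.0
    angle_ok = cos >= math.cos(math.radians(angle_tol_deg))      # moving direction
    mag_ok = abs(mag_v - mag_d) / mag_d <= mag_tol               # moving amount
    return angle_ok and mag_ok

def vector_group_coincides(acquired, defined_group):
    """Compare the moving vectors acquired during the determination period with
    the defined moving vector group (input information determination, 514B)."""
    if len(acquired) != len(defined_group):
        return False
    return all(vectors_coincide(v, d) for v, d in zip(acquired, defined_group))

# Timing determination (514A): does the first moving vector match the vector
# defined for the start timing of the determination period?
print(vectors_coincide((9.0, 1.0), (10.0, 0.0)))                       # True
print(vector_group_coincides([(9, 1), (0, -10)], [(10, 0), (0, -9)]))  # True
```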
The timing determination section 514A and the input information determination section 514B may adjust the difficulty level based on the distance between the input section 60 and the body, the distance being determined based on the input image, and perform the determination process.
A virtual camera control section 515 controls the position of the virtual camera in the virtual three-dimensional space. The virtual camera control section 515 may control the position of the virtual camera based on the distance between the input section 60 and the body, the distance being determined based on the input image (reflected light image). The virtual camera control section 515 may control the angle of view of the virtual camera based on the distance between the input section 60 and the object specified based on the input image (reflected light image). The virtual camera control section 515 may control the view direction (line-of-sight direction) of the virtual camera based on the positional relationship between the body and the input section 60, the positional relationship being determined based on the reflected light image.
The input section 60 of the second game system includes the depth sensor 620 and the RGB camera 630, and receives an input by performing image processing on the body (e.g., the player or the hand of the player) without requiring an input device (e.g., a controller). This makes it possible to perform various novel game processes. The depth sensor 620 and the RGB camera 630 of the input section 60 are described below.
The depth sensor 620 according to this embodiment is described below with reference to
The depth sensor 620 receives reflected light of the light emitted from the light-emitting section 610. The depth sensor 620 generates a reflected light image obtained by extracting the spatial intensity distribution of reflected light. For example, the depth sensor 620 extracts reflected light from the body irradiated by the light-emitting section 610 to obtain a reflected light image by calculating the difference between the quantity of light received when the light-emitting section 610 emits light and the quantity of light received when the light-emitting section 610 does not emit light. The value of each pixel of the reflected light image corresponds to the distance (depth value) between a position GP of the input section 60 (depth sensor 620) and the body. The position GP of the input section 60 is synonymous with the position of the depth sensor 620 and the light-receiving position of the depth sensor 620.
In the example illustrated in
In this embodiment, a pixel having a luminance (quantity of received light or pixel value) equal to or larger than a predetermined value is extracted from the reflected light image as a pixel close to the position GP of the input section 60. For example, when the reflected light image has 256 grayscale levels, a pixel having a value equal to or larger than a predetermined value (e.g., 200) is extracted as the high-luminance area.
The reflected light image obtained by the depth sensor is correlated with the distance (depth value) between the position GP of the input section 60 and the body. As illustrated in
In this embodiment, the position of the player P in the real space is calculated based on the luminance of the pixel extracted from the reflected light image as the high-luminance area by utilizing the above principle. For example, a pixel of the reflected light image having the highest luminance value is used as a feature point, and the distance between the position GP and the player P is calculated based on the luminance of the feature point. Note that the feature point may be the center pixel of the area of the hand determined based on a shape pattern provided in advance, the moving vector, or the like. When the reflected light image includes a large high-luminance area, it may be determined that the body is positioned near the input section as compared with the case where the high-luminance area is small, for example.
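A minimal sketch of the high-luminance extraction and the distance estimation described above is shown below, using the 200/256 threshold from the example and an assumed inverse-square luminance-to-distance constant (so the resulting distance is in arbitrary units).

```python
import numpy as np

# Sketch: extract the high-luminance area (value >= 200, as in the example
# above) from a 256-level reflected light image, take the brightest pixel as
# the feature point, and estimate the distance from its luminance. The
# luminance-to-distance constant K is an assumption.

def extract_feature_point(reflected, threshold=200):
    mask = reflected >= threshold          # high-luminance area
    if not mask.any():
        return None, mask                  # may be treated as "no input"
    y, x = np.unravel_index(np.argmax(reflected), reflected.shape)
    return (int(x), int(y)), mask

def distance_from_feature(reflected, feature_xy, k=4.0e5):
    x, y = feature_xy
    lum = float(reflected[y, x]) + 1.0
    return (k / lum) ** 0.5                # inverse-square assumption

refl = np.zeros((240, 320), dtype=np.uint8)
refl[100:120, 150:170] = 230               # hypothetical bright hand region
fp, area = extract_feature_point(refl)
print(fp, int(area.sum()), round(distance_from_feature(refl, fp), 2))
```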
In this embodiment, the position of the body in the real space with respect to the input section 60 may be determined based on the reflected light image. For example, when the feature point is positioned at the center of the reflected light image, it may be determined that the body is positioned along the light-emitting direction of the light source of the input section 60. When the feature point is positioned in the upper area of the reflected light image, it may be determined that the body is positioned higher than the input section 60. When the feature point is positioned in the lower area of the reflected light image, it may be determined that the body is positioned lower than the input section 60. When the feature point is positioned in the left area of the reflected light image, it may be determined that the body is positioned on the right side with respect to the input section 60 (when viewed from the input section (light source)). When the feature point is positioned in the right area of the reflected light image, it may be determined that the body is positioned on the left side with respect to the input section 60 (when viewed from the input section (light source)). In this embodiment, the positional relationship between the body and the input section 60 can thus be determined based on the reflected light image.
In this embodiment, the moving direction of the body in the real space may be determined based on the reflected light image. For example, when the feature point is positioned at the center of the reflected light image, and the luminance of the feature point increases, it may be determined that the body moves in the direction of the light source of the input section 60. When the feature point moves from the upper area to the lower area of the reflected light image, it may be determined that the body moves downward relative to the input section 60. When the feature point moves from the left area to the right area of the reflected light image, it may be determined that the body moves leftward relative to the input section 60. Specifically, the moving direction of the body relative to the input section 60 may be determined based on the reflected light image.
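The position and moving-direction rules described above can be expressed as the following sketch; the dead-zone ratio used to decide that the feature point is at the center of the image is an assumption.

```python
# Sketch: derive the positional relationship and the moving direction of the
# body from the feature point position in the reflected light image, following
# the rules described above (the image is mirrored left/right as seen from the
# light source). The dead-zone ratio is an assumption.

def relative_position(feature_xy, image_size, dead_zone=0.15):
    """Return (horizontal, vertical) relative to the input section, e.g.
    ('right', 'higher'), or ('center', 'center') near the image center."""
    x, y = feature_xy
    w, h = image_size
    nx, ny = x / w - 0.5, y / h - 0.5      # -0.5 .. +0.5, origin at image center
    if nx < -dead_zone:
        horizontal = "right"               # left area of image -> body on right
    elif nx > dead_zone:
        horizontal = "left"                # right area of image -> body on left
    else:
        horizontal = "center"
    if ny < -dead_zone:
        vertical = "higher"                # upper area -> body higher than sensor
    elif ny > dead_zone:
        vertical = "lower"                 # lower area -> body lower than sensor
    else:
        vertical = "center"
    return horizontal, vertical

def moving_direction(prev_xy, curr_xy):
    """Moving direction of the body relative to the input section."""
    dx, dy = curr_xy[0] - prev_xy[0], curr_xy[1] - prev_xy[1]
    horiz = "leftward" if dx > 0 else "rightward" if dx < 0 else ""
    vert = "downward" if dy > 0 else "upward" if dy < 0 else ""
    return (horiz + " " + vert).strip() or "still"

print(relative_position((40, 60), (320, 240)))   # ('right', 'higher')
print(moving_direction((40, 60), (120, 60)))     # 'leftward'
```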
Note that the reflected light from the body decreases to a large extent as the distance between the body and the position GP of the input section 60 increases. For example, the quantity of received light per pixel of the reflected light image decreases in inverse proportion to the second power of the distance between the body and the position GP of the input section 60. Therefore, when the player P is positioned at a distance of about 20 m from the input section 60, the quantity of received light from the player P decreases to a large extent, so that a high-luminance area that specifies the player P cannot be extracted. In this case, it may be determined that there is no input. When a high-luminance area cannot be extracted, an alarm sound may be output from the speaker 92.
In this embodiment, an RGB image is acquired by the RGB camera (imaging section) 630 as the input information. Since the RGB image corresponds to the reflected light image, the extraction accuracy of the moving vector (motion vector) of the body and the shape area can be improved.
In this embodiment, a digitized RGB image is acquired from the RGB camera based on the drawing frame rate (e.g., 60 frames per second (fps)), for example. The moving vector (motion vector) that indicates the moving amount and the moving direction of the feature point between two images that form a video image captured by the RGB camera 630 is calculated. The feature point of the image refers to one or more pixels that can be determined by corner detection or edge extraction. The moving vector is a vector that indicates the moving direction and the moving amount of the feature point (may be an area including the feature point) in the current image (i.e., optical flow). The optical flow may be determined by a gradient method or a block matching method, for example. In this embodiment, the contour of the player P and the contour of the hand of the player P are detected from the captured image by edge extraction, and the moving vector of the pixel of the detected contour is calculated, for example.
In this embodiment, it is determined that the player P has performed an input operation when the moving amount of the feature point is equal to or larger than a predetermined moving amount. The moving vector of the feature point is matched with the defined moving vector provided in advance to extract the area of the hand of the player P. In this embodiment, the body may be extracted based on the RGB color value of each pixel of the RGB image acquired by the RGB camera 630.
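One way to obtain such a moving vector is simple block matching between consecutive grayscale frames, sketched below; a gradient method would work equally well. The block size, search radius, and input-detection threshold are assumptions.

```python
import numpy as np

# Sketch: estimate the moving vector (optical flow) of a feature point between
# two consecutive grayscale frames by block matching. Block size, search
# radius, and the input-detection threshold are assumptions.

def block_match(prev, curr, point, block=8, search=12):
    """Return the moving vector (dx, dy) of the block around `point`."""
    px, py = point
    h, w = prev.shape
    half = block // 2
    ref = prev[py - half:py + half, px - half:px + half].astype(np.float32)
    best, best_err = (0, 0), None
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            x, y = px + dx, py + dy
            if x - half < 0 or y - half < 0 or x + half > w or y + half > h:
                continue
            cand = curr[y - half:y + half, x - half:x + half].astype(np.float32)
            err = float(np.abs(ref - cand).mean())   # mean absolute difference
            if best_err is None or err < best_err:
                best, best_err = (dx, dy), err
    return best

def input_detected(moving_vector, min_amount=5.0):
    dx, dy = moving_vector
    return (dx * dx + dy * dy) ** 0.5 >= min_amount

prev = np.zeros((64, 64), dtype=np.uint8)
prev[28:36, 20:28] = 255                    # bright patch (e.g., the hand)
curr = np.roll(prev, 6, axis=1)             # patch moved 6 pixels to the right
vec = block_match(prev, curr, (24, 32))
print(vec, input_detected(vec))             # (6, 0) True
```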
According to this embodiment, the distance (depth value) between the input section 60 and the body can be determined by the depth sensor 620, and the position coordinates (X, Y) and the moving vector of the feature point (high-luminance area) in a two-dimensional plane (reflected light image or RGB image) can be extracted. Therefore, the position Q of the object in the real space based on the input section 60 can be determined based on the distance (Z) between the input section 60 and the body, and the position coordinates (X, Y) in the reflected light image and the RGB image.
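A minimal sketch of the conversion from the image coordinates (X, Y) and the depth value Z to the position Q in the real space is shown below, assuming a pinhole camera model with hypothetical field-of-view angles and image resolution.

```python
import math

# Sketch: convert the feature point's image coordinates (X, Y) and the distance
# Z obtained from the depth sensor into a position Q in the real space with the
# input section 60 as the origin. The field-of-view angles and image
# resolution are assumptions (pinhole camera model).

IMAGE_W, IMAGE_H = 320, 240
FOV_H_DEG, FOV_V_DEG = 57.0, 43.0          # hypothetical field of view

def real_space_position(x_pix, y_pix, depth_z):
    # normalized coordinates in the range -0.5 .. +0.5, origin at image center
    nx = x_pix / IMAGE_W - 0.5
    ny = 0.5 - y_pix / IMAGE_H              # image y grows downward
    qx = depth_z * math.tan(math.radians(FOV_H_DEG) * nx)
    qy = depth_z * math.tan(math.radians(FOV_V_DEG) * ny)
    return qx, qy, depth_z

print(tuple(round(v, 3) for v in real_space_position(240, 60, 2.0)))
```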
In this embodiment, a display image displayed on the display section is generated based on the input image (reflected light image or RGB image) obtained by the input section 60. The details thereof are described below.
In this embodiment, the size of the object disposed in the virtual space is controlled based on the distance L between the position GP of the input section 60 and the body calculated based on the reflected light image.
As illustrated in
As illustrated in
Specifically, the scaling factor of the object is controlled based on a change in the distance L between the position GP of the input section 60 and the body. For example, the scaling factor of the object is reduced as the distance L decreases, and the scaling factor of the object (e.g., the character C) is increased as the distance L increases.
In this embodiment, since the reflected light image is acquired at predetermined intervals (e.g., the drawing frame rate (60 fps)), the distance L between the position GP of the input section 60 and the body can be calculated in real time. Therefore, the scaling factor of the object may be controlled in real time based on a change in the distance L.
In this embodiment, the object modeled in advance at a scaling factor of 1 is stored in the storage section 570. A control target (scaling target) object and a non-control target (non-scaling target) object are distinguishably stored in the storage section 570.
Specifically, a control flag “1” is stored corresponding to the ID of each control target object (i.e., character C, instruction object OB1, advance instruction object OB2, and moving timing marks A1 and A2), and a control flag “0” is stored corresponding to the ID of each non-control target object (e.g., scores S1 and S2).
The scaling factor of the object for which the control flag “1” is set is calculated based on the distance L, and the object is scaled up/down based on the calculated scaling factor. This makes it possible to scale up/down the object that provides information necessary for the player. In this embodiment, the instruction object for input evaluation is set to the control target object.
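The control-flag mechanism can be sketched as follows; the object table below is a simplified, hypothetical representation of the data stored in the storage section 570.

```python
# Sketch: scale only the objects whose control flag is set to "1", using the
# distance-based scaling factor. The object IDs and table structure are
# simplified, hypothetical placeholders.

object_table = {
    "character_C":     {"control_flag": 1, "base_scale": 1.0},
    "instruction_OB1": {"control_flag": 1, "base_scale": 1.0},
    "advance_OB2":     {"control_flag": 1, "base_scale": 1.0},
    "timing_mark_A1":  {"control_flag": 1, "base_scale": 1.0},
    "score_S1":        {"control_flag": 0, "base_scale": 1.0},  # never scaled
}

def apply_scaling(table, factor):
    scales = {}
    for obj_id, entry in table.items():
        if entry["control_flag"] == 1:
            scales[obj_id] = entry["base_scale"] * factor   # control target
        else:
            scales[obj_id] = entry["base_scale"]            # non-control target
    return scales

# With the player far from the input section (factor > 1), only the control
# target objects are scaled up.
print(apply_scaling(object_table, 2.0))
```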
According to this embodiment, since the size of the object is controlled based on the distance L between the position GP of the input section 60 and the body, it is possible to generate a display image including an object having an appropriate size for the player P. For example, since the object and the character are scaled up when the player P has moved away from the input section 60, the player P can easily determine the instructions required for input determination. Since the instruction object OB1 and the character C are scaled down when the player P has approached the input section 60, the player P can easily determine the instructions by observing the object having an appropriate size.
In this embodiment, the size of the object may be controlled based on the input determination results (timing determination results or input information determination results). Specifically, the size of the object may be controlled based on the distance L between the position GP of the input section 60 and the body, and the input determination results.
For example, the scaling factor of the object may be set based on the distance L (e.g., to 2) when the input start timing coincides with the start timing (reference start timing or auxiliary start timing) of the determination period, and the scaling factor calculated based on the distance L may be increased (e.g., to 3) when the input start timing does not coincide with the start timing of the determination period. This allows an inexperienced player to easily observe the object.
The scaling factor of the object may be set based on the distance L (e.g., to 2) when the input information that has been input during the determination period (reference determination period or auxiliary determination period) coincides with the defined input information, and the scaling factor calculated based on the distance L may be increased (e.g., to 3) when the input information does not coincide with the defined input information. This allows the player to easily observe the object, so that the possibility that the input information is determined to coincide with the defined input information during the determination period can be increased.
The scaling factor of the object may be set based on the distance L when the score S1 of the player is equal to or higher than a predetermined score value, and the scaling factor calculated based on the distance L may be increased when the score S1 of the player is lower than the predetermined score value. This allows the player to easily obtain a high score (i.e., the object can be controlled to a size appropriate for the level of the player).
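A sketch combining these adjustments is shown below. Only the illustrative values 2 and 3 come from the examples above; the boost multiplier, the score threshold, and the rule that failures compound are assumptions.

```python
# Sketch: combine the distance-based scaling factor with the input
# determination results and the player's score. The boost multiplier, the
# score threshold, and the compounding rule are assumptions.

def adjusted_scaling_factor(base_factor, timing_ok, input_ok,
                            score, score_threshold=1000, boost=1.5):
    factor = base_factor
    if not timing_ok:              # input start timing missed the start timing
        factor *= boost
    if not input_ok:               # input information did not coincide
        factor *= boost
    if score < score_threshold:    # player's score below the predetermined value
        factor *= boost
    return factor

# Player near the expected distance who matched the start timing: factor stays 2.
print(adjusted_scaling_factor(2.0, True, True, score=1500))    # 2.0
# Player who missed the start timing: the object is enlarged (2 -> 3).
print(adjusted_scaling_factor(2.0, False, True, score=1500))   # 3.0
```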
In this embodiment, the instruction object OB1 is scaled up with the lapse of time during the advance period or the reference determination period, as illustrated in
In this embodiment, the degree by which the scaling factor of the instruction object is changed with the lapse of time during the advance period or the reference determination period is controlled based on the distance between the body and the input section 60, the distance being determined based on the reflected light image.
As illustrated in
As illustrated in
In this embodiment, the position and the angle of view of the virtual camera may be controlled based on the distance L between the position GP of the input section 60 and the body and on the position Q of the body, both being calculated based on the reflected light image.
According to this embodiment, the distance L can be calculated in real time at predetermined intervals. Therefore, the position and the angle of view of the virtual camera may be controlled in real time based on the distance L.
In this embodiment, the viewpoint position of the virtual camera VC is controlled as described below. For example, when the distance between the position GP of the input section 60 and the body (player P) is L1 (L1≦LD) (see
When the distance between the position GP of the input section 60 and the body is L2 (L1≦LD<L2) (see
For example, when the character C is disposed at a constant position within the field-of-view range of the virtual camera VC disposed at the position DP1, the character C is scaled up in the generated display image by moving the virtual camera VC from the position DP1 to the position DP2. Specifically, the character C is scaled up by a perspective projection transformation process, so that a display image including an object having an appropriate size for the player P can be generated.
In this embodiment, the angle of view of the virtual camera VC is controlled as described below. For example, when the distance between the position GP of the input section 60 and the body (player P) is L1 (L1≦LD) (see
When the distance between the position GP of the input section 60 and the body is L2 (L1≦LD<L2), the angle of view of the virtual camera VC is reduced to θ2 as compared with the case where the distance L is L1 (see
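The camera control described in this section can be sketched as follows; DP1, DP2, θ1, and θ2 correspond to the symbols above, while the numerical values, the distance range, and the linear interpolation between the two settings are assumptions.

```python
# Sketch: control the virtual camera position and angle of view from the
# distance L between the input section and the body. DP1/DP2 and THETA1/THETA2
# correspond to the symbols used above; the values and the linear interpolation
# between them are assumptions.

L_NEAR, L_FAR = 1.0, 4.0            # hypothetical range of the distance L (metres)
DP1, DP2 = 10.0, 6.0                # camera distance to the character (near/far player)
THETA1, THETA2 = 60.0, 40.0         # angle of view in degrees (near/far player)

def camera_params(distance_l):
    t = (distance_l - L_NEAR) / (L_FAR - L_NEAR)
    t = max(0.0, min(1.0, t))
    camera_distance = DP1 + (DP2 - DP1) * t          # camera moves closer -> character larger
    angle_of_view = THETA1 + (THETA2 - THETA1) * t   # narrower angle -> character larger
    return camera_distance, angle_of_view

for L in (1.0, 2.5, 4.0):
    print(L, camera_params(L))
```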
In the second embodiment, the input determination process is performed by determining the input timing and the input information (moving vector (motion vector) and moving path) based on the reflected light image and the RGB image.
For example, it is determined that the player has performed an input operation when the moving amount of the moving vector between images of a video image (reflected light image and RGB image) is equal to or larger than a predetermined amount, and the moving direction coincides with the defined moving vector.
Whether or not the input start timing IS coincides with the start timing (e.g., reference start timing BS) of the determination period is determined by determining whether or not the moving vector that indicates the moving amount and the moving direction of the feature point (given area) specified based on the reflected light image and the RGB image coincides with the moving vector corresponding to the start timing of the determination period (reference determination period or auxiliary determination period A) defined in advance.
Whether or not the input information that has been input during the determination period coincides with the defined input information MD is determined as follows. The moving vector acquired during the determination period (reference determination period or auxiliary determination period), which indicates the moving amount and the moving direction of the feature point (given area) specified based on the input image (reflected light image and RGB image), is compared with the defined moving vector of the feature point between images that is defined in advance for the determination period. When the feature point is extracted between three or more input images, the acquired moving vector group is compared with the defined moving vector group in the same manner.
In the second embodiment, a plurality of auxiliary start timings PS1, PS2, and PS3 corresponding to the reference start timing BS are also defined, as illustrated in
In the second embodiment, the difficulty level of the input timing determination process may be adjusted based on the distance L between the position GP of the input section 60 and the body. For example, when the distance between the position GP of the input section 60 and the body is L1 (L1≦LD) (see
In the second embodiment, the difficulty level of the input information determination process may be adjusted based on the distance between the position GP of the input section 60 and the body. For example, when the distance between the position GP of the input section 60 and the body is L1 (L1≦LD) (see
Specifically, it becomes difficult for the player P to observe the instruction object as the player P moves away from the input section 60, and the extraction accuracy of the feature point based on the reflected light image and the RGB image deteriorates. In the second embodiment, since moving away from the input section 60 is thus disadvantageous to the player, the difficulty level of the input determination process may be adjusted accordingly.
According to this embodiment, the distance L can be acquired in real time at predetermined intervals (e.g., drawing frame rate (60 fps)). Therefore, the difficulty level of the input information determination process may be adjusted in real time based on the distance L.
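A sketch of one possible real-time adjustment is shown below. It assumes that the determination criteria are relaxed as the player moves away from the input section, to offset the disadvantage noted above; the threshold corresponding to LD and the tolerance values are hypothetical.

```python
# Sketch: adjust the determination tolerances from the distance L. One plausible
# reading of the embodiment is to relax the criteria as the player moves away
# from the input section; the direction of the adjustment and the values below
# are assumptions.

L_THRESHOLD_LD = 2.0    # corresponds to LD in the text (hypothetical value, metres)

def determination_tolerances(distance_l,
                             base_timing_frames=3, base_angle_deg=30.0):
    """Return (timing tolerance in frames, moving-vector angle tolerance)."""
    if distance_l <= L_THRESHOLD_LD:
        return base_timing_frames, base_angle_deg
    # farther than LD: widen both tolerances so the determination is easier
    relax = distance_l / L_THRESHOLD_LD
    return int(base_timing_frames * relax), base_angle_deg * relax

print(determination_tolerances(1.5))   # (3, 30.0)
print(determination_tolerances(4.0))   # (6, 60.0)
```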
The flow of the process according to the second embodiment is described below with reference to
In this embodiment, the positional relationship between the body and the input section 60 can be determined based on the reflected light image. A first application example illustrates an example of a process based on the positional relationship between the body and the input section.
(1) Position of Object
In this embodiment, the position of the object disposed in the virtual space may be determined based on the positional relationship between the body and the input section 60, the positional relationship being determined based on the reflected light image. As illustrated in
As illustrated in
Specifically, since the position of the object disposed in the virtual space can be determined based on the positional relationship between the body and the input section 60, the positional relationship being determined based on the input image, it is possible to provide a display image in which the object is disposed at a position at which the object can be easily observed by the player. Note that the position of the object may be determined in real time.
(2) Moving Direction
In this embodiment, the moving direction of the object in the virtual space may be controlled based on the positional relationship between the body and the input section 60, the positional relationship being determined based on the reflected light image. In the example illustrated in
In the example illustrated in
Specifically, since the moving direction of the object in the virtual space can be determined based on the positional relationship between the body and the input section 60, the positional relationship being determined based on the input image, it is possible to provide a display image in which the object moves toward a position at which the object can be easily observed by the player. Note that the moving direction of the object may be determined in real time.
(3) View Direction
In this embodiment, the view direction of the virtual camera in the virtual space may be controlled based on the positional relationship between the body and the input section 60, the positional relationship being determined based on the reflected light image.
As illustrated in
This embodiment may be applied to a music game that determines the input timing in synchronization with reproduction of music data. For example, this embodiment may be applied to a game system that allows the player to give a performance to the rhythm indicated by the music data by virtually striking a percussion instrument (e.g., drum) at the reference timing indicated by the music data.
The size of an area I including a determination reference object OB4 and the instruction marks OB5 and OB6 may be controlled based on the distance L between the body and the input section 60, the distance being determined based on the input image. For example, the scaling factor of the area I may be increased as the distance L increases, and may be reduced as the distance L decreases.
The moving speed v of the instruction marks OB5 and OB6 may also be controlled based on the distance L between the body and the input section 60, the distance being determined based on the input image.
For example, when the distance between the position GP of the input section 60 and the body is L1 (L1≦LD) (see
In this embodiment, the moving direction of the instruction mark may be controlled based on the positional relationship between the body and the input section 60, the positional relationship being determined based on the reflected light image.
In the example illustrated in
In the example illustrated in
Specifically, since the moving direction of the object in the virtual space can be determined based on the positional relationship between the body and the input section 60, the positional relationship being determined based on the input image, it is possible to provide a display image in which the instruction marks OB5 and OB6 are disposed at positions at which the instruction marks OB5 and OB6 can be easily observed by the player.
The second game system according to this embodiment determines the motion (movement) of the player as follows. As illustrated in
As illustrated in
Note that the process may be performed in human part units (e.g., arm bone and leg bone). In this case, a plurality of bones may be defined in advance in part units, and a bone that agrees well with the extracted silhouette may be determined in part units.
Although only some embodiments of the invention have been described in detail above, those skilled in the art would readily appreciate that many modifications are possible in the embodiments without materially departing from the novel teachings and advantages of the invention. Accordingly, such modifications are intended to be included within the scope of the invention.