This application claims priority to Japanese Patent Application No. 2023-017466 filed on Feb. 8, 2023, the entire contents of which are incorporated herein by reference.
The present disclosure relates to information processing in which a player object is moved in a virtual space on the basis of an operation performed by a user, an image of the virtual space is taken by a virtual camera, and the virtual space is rendered.
Hitherto, a game that progresses with a virtual three-dimensional space displayed from a first-person or third-person viewpoint has been known. Among such games, a game for which a touch panel is used for operations has also been known. For example, a game that is played on a mobile terminal and in which the imaging direction of a virtual camera is adjustable by a touch operation has been known. In addition, among such games, a game in which a map image indicating the entirety or a part of a game world is displayed has also been known. For example, a game in which a “mini-map” is displayed so as to be superimposed on the upper left or upper right of a 3D screen has also been known.
In a game in which the imaging direction of a virtual camera is adjustable by an operation performed by a user, a user who has not yet gotten used to the game may not have sufficient spatial perception of the virtual three-dimensional space. When such a user tries to change the direction of the virtual camera by a touch operation, it is not always easy to change the imaging direction to the desired direction while grasping which direction a player character is facing in the virtual three-dimensional space. For example, it is conceivable to present information on the field of view of the virtual camera by using a mini-map, but it is still not easy to perform operations while viewing both the mini-map and the virtual three-dimensional space.
Therefore, the present application discloses a non-transitory computer-readable storage medium having an information processing program stored therein, an information processing system, and an information processing method that, when a user desires to change the imaging direction of a virtual camera on a screen on which a virtual space is displayed as a three-dimensional image, make it easier for the user to recognize how the imaging direction has been changed while enabling an accurate and easy imaging direction instruction operation.
For example, the following configurations are exemplified.
Configuration 1 is directed to one or more non-transitory computer-readable storage media having stored therein an information processing program causing one or more processors to execute processing of moving a player object in a virtual space on the basis of an operation performed by a user, taking an image of the virtual space by a virtual camera, and rendering the virtual space, the information processing program causing the one or more processors to execute processing of:
According to the above configuration, in the display in which the virtual space is viewed from above, the direction desired to be faced in the virtual space can be designated, and the direction of the virtual camera is changed over a predetermined time after switching from the second display to the first display. Accordingly, it is made possible for the user to recognize how the direction of the virtual camera has been changed, so that the user's sense of direction is not impaired.
In Configuration 2 based on Configuration 1 above, the second display may be display of a planar image representing terrain of the virtual space when the virtual space is viewed from above.
According to the above configuration, since the virtual space is displayed as a planar image, the terrain and the positional relationship in the virtual space can be presented in a form that is easy for the user to grasp. It is also possible to designate a direction to a point that is not visible in the first display, as the second camera direction.
In Configuration 3 based on Configuration 1 above, the second camera direction may be determined by the user designating a point in the virtual space and performing a first operation during the second display.
According to the above configuration, for example, the player can designate, as the point, a point to which the player desires to move. The second camera direction can be determined on the basis of this point.
In Configuration 4 based on Configuration 3 above, the second camera direction may be a direction from a position of the player object toward the point in the virtual space designated by the user.
According to the above configuration, the first display can be presented in a manner that makes it easy to understand in which direction, as viewed from the position of the player object, the point designated by the user is located.
In Configuration 5 based on Configuration 3 above, a second operation may be selectively accepted with the first operation during the second display, and if a point in the virtual space is designated and the second operation is performed, the player object may be moved to the designated point after switching from the second display to the first display.
According to the above configuration, by merely designating a point and performing the second operation during the second display, the player character can be automatically moved to the point after returning to the first display. Accordingly, the convenience of the player can be improved. In addition, during the second display, if the above first operation is performed, only the direction of the virtual camera can be changed. Therefore, the user can select control to be performed when returning from the second display to the first display, by selectively using the first operation and the second operation.
In Configuration 6 based on Configuration 5 above, during the second display, if the point in the virtual space designated by the user is a point that cannot be reached by the player object, acceptance of the second operation may be restricted.
According to the above configuration, during the second display, if the user designates a point to which the player object cannot move, control in which the player object is automatically moved can be prevented from being selected. Accordingly, a wasteful action in which the player object tries to move toward a location that it cannot reach can be prevented when switching from the second display to the first display.
In Configuration 7 based on Configuration 1 above, a part of the virtual space may be displayed in the second display, and a display range of the virtual space may be changed according to an operation performed by the user.
According to the above configuration, a direction to a point that is not visible in the first display can be designated.
In Configuration 8 based on any one of Configurations 1 to 7 above, a direction image indicating an imaging direction of the virtual camera may be displayed in the second display, and a display position of the direction image may be changed such that a direction indicated by the direction image is changed according to an operation performed by the user.
According to the above configuration, during the second display, the direction of the virtual camera can be changed directly by an operation performed by the user, without designating a point.
In Configuration 9 based on any one of Configurations 1 to 8 above, the player object may be set as a gazing point of the virtual camera in the first display, and the virtual camera may be rotated from the first camera direction to the direction based on the second camera direction such that, of the imaging direction of the virtual camera, only an angle of a yaw component is changed while a pitch component and a roll component are not changed.
In Configuration 10 based on any one of Configurations 1 to 9 above, the first camera direction may be a predetermined direction that is not a vertically downward direction in the virtual space.
According to the above configuration, a three-dimensional image having a sense of perspective and a sense of depth is represented as the first display to the user. Then, the direction of the virtual camera is changed over a predetermined period of time when switching from the second display to the first display, thereby allowing the user to grasp the change in direction without impairing the user's sense of direction in the virtual three-dimensional space.
According to the present disclosure, a predetermined direction can be designated by using the second display showing the virtual space as viewed from above, and the direction of the virtual camera is changed over a predetermined period of time when switching to the first display, thereby making it easier for the user to grasp the change in the direction of the virtual camera.
Hereinafter, one exemplary embodiment will be described.
A game system according to an example of the exemplary embodiment will be described below. An example of a game system 1 according to the exemplary embodiment includes a main body apparatus (an information processing apparatus, which functions as a game apparatus main body in the exemplary embodiment) 2, a left controller 3, and a right controller 4. Each of the left controller 3 and the right controller 4 is attachable to and detachable from the main body apparatus 2. That is, the game system 1 can be used as a unified apparatus obtained by attaching each of the left controller 3 and the right controller 4 to the main body apparatus 2. Further, in the game system 1, the main body apparatus 2, the left controller 3, and the right controller 4 can also be used as separate bodies (see
The shape and the size of the housing 11 are discretionary. As an example, the housing 11 may be of a portable size. Further, the main body apparatus 2 alone or the unified apparatus obtained by attaching the left controller 3 and the right controller 4 to the main body apparatus 2 may function as a mobile apparatus. The main body apparatus 2 or the unified apparatus may function as a handheld apparatus or a portable apparatus.
As shown in
The main body apparatus 2 includes a touch panel 13 on the screen of the display 12. In the exemplary embodiment, the touch panel 13 is of a type capable of receiving a multi-touch input (e.g., electrical capacitance type). However, the touch panel 13 may be of any type, and may be, for example, of a type capable of receiving a single-touch input (e.g., resistive film type).
The main body apparatus 2 includes speakers (i.e., speakers 88 shown in
Further, the main body apparatus 2 includes a left terminal 17, which is a terminal for the main body apparatus 2 to perform wired communication with the left controller 3, and a right terminal 21, which is a terminal for the main body apparatus 2 to perform wired communication with the right controller 4.
As shown in
The main body apparatus 2 includes a lower terminal 27. The lower terminal 27 is a terminal for the main body apparatus 2 to communicate with a cradle. In the exemplary embodiment, the lower terminal 27 is a USB connector (more specifically, a female connector). Further, when the unified apparatus or the main body apparatus 2 alone is mounted on the cradle, the game system 1 can display on a stationary monitor an image generated by and outputted from the main body apparatus 2. Further, in the exemplary embodiment, the cradle has the function of charging the unified apparatus or the main body apparatus 2 alone mounted on the cradle. Further, the cradle has the function of a hub device (specifically, a USB hub).
The left controller 3 includes a left analog stick (hereinafter, referred to as a “left stick”) 32 as an example of a direction input device. As shown in
The left controller 3 includes various operation buttons. The left controller 3 includes four operation buttons 33 to 36 (specifically, a right direction button 33, a down direction button 34, an up direction button 35, and a left direction button 36) on the main surface of the housing 31. Further, the left controller 3 includes a record button 37 and a “−” (minus) button 47. The left controller 3 includes a first L-button 38 and a ZL-button 39 in an upper left portion of a side surface of the housing 31. Further, the left controller 3 includes a second L-button 43 and a second R-button 44, on the side surface of the housing 31 on which the left controller 3 is attached to the main body apparatus 2. These operation buttons are used to give instructions depending on various programs (e.g., an OS program and an application program) executed by the main body apparatus 2.
Further, the left controller 3 includes a terminal 42 for the left controller 3 to perform wired communication with the main body apparatus 2.
Similarly to the left controller 3, the right controller 4 includes a right analog stick (hereinafter, referred to as a “right stick”) 52 as a direction input section. In the exemplary embodiment, the right stick 52 has the same configuration as that of the left stick 32 of the left controller 3. Further, the right controller 4 may include a directional pad, a slide stick that allows a slide input, or the like, instead of the analog stick. Further, similarly to the left controller 3, the right controller 4 includes four operation buttons 53 to 56 (specifically, an A-button 53, a B-button 54, an X-button 55, and a Y-button 56) on a main surface of the housing 51. Further, the right controller 4 includes a “+” (plus) button 57 and a home button 58. Further, the right controller 4 includes a first R-button 60 and a ZR-button 61 in an upper right portion of a side surface of the housing 51. Further, similarly to the left controller 3, the right controller 4 includes a second L-button 65 and a second R-button 66.
Further, the right controller 4 includes a terminal 64 for the right controller 4 to perform wired communication with the main body apparatus 2.
The main body apparatus 2 includes a processor 81. The processor 81 is an information processing section for executing various types of information processing to be executed by the main body apparatus 2. For example, the processor 81 may be composed only of a CPU (Central Processing Unit), or may be composed of a SoC (System-on-a-chip) having a plurality of functions such as a CPU function and a GPU (Graphics Processing Unit) function. The processor 81 executes an information processing program (e.g., a game program) stored in a storage section (specifically, an internal storage medium such as a flash memory 84, an external storage medium attached to the slot 23, or the like), thereby performing the various types of information processing.
The main body apparatus 2 includes the flash memory 84 and a DRAM (Dynamic Random Access Memory) 85 as examples of internal storage media built into the main body apparatus 2. The flash memory 84 and the DRAM 85 are connected to the processor 81. The flash memory 84 is a memory mainly used to store various kinds of data (or programs) to be saved in the main body apparatus 2. The DRAM 85 is a memory used to temporarily store various kinds of data used for information processing.
The main body apparatus 2 includes a slot interface (hereinafter, abbreviated as “I/F”) 91. The slot I/F 91 is connected to the processor 81. The slot I/F 91 is connected to the slot 23, and in accordance with an instruction from the processor 81, reads and writes data from and to the predetermined type of storage medium (e.g., a dedicated memory card) attached to the slot 23.
The processor 81 appropriately reads and writes data from and to the flash memory 84, the DRAM 85, and each of the above storage media, thereby performing the above information processing.
The main body apparatus 2 includes a network communication section 82. The network communication section 82 is connected to the processor 81. The network communication section 82 communicates (specifically, through wireless communication) with an external apparatus via a network. In the exemplary embodiment, as a first communication form, the network communication section 82 connects to a wireless LAN and communicates with an external apparatus, using a method compliant with the Wi-Fi standard. Further, as a second communication form, the network communication section 82 wirelessly communicates with another main body apparatus 2 of the same type, using a predetermined method for communication (e.g., communication based on a unique protocol or infrared light communication). The wireless communication in the above second communication form achieves the function of enabling so-called “local communication” in which the main body apparatus 2 can wirelessly communicate with another main body apparatus 2 placed in a closed local network area, and the plurality of main body apparatuses 2 directly communicate with each other to transmit and receive data.
The main body apparatus 2 includes a controller communication section 83. The controller communication section 83 is connected to the processor 81. The controller communication section 83 wirelessly communicates with the left controller 3 and/or the right controller 4. The communication method between the main body apparatus 2, and the left controller 3 and the right controller 4, is discretionary. In the exemplary embodiment, the controller communication section 83 performs communication compliant with the Bluetooth (registered trademark) standard with the left controller 3 and with the right controller 4.
The processor 81 is connected to the left terminal 17, the right terminal 21, and the lower terminal 27. When performing wired communication with the left controller 3, the processor 81 transmits data to the left controller 3 via the left terminal 17 and also receives operation data from the left controller 3 via the left terminal 17. Further, when performing wired communication with the right controller 4, the processor 81 transmits data to the right controller 4 via the right terminal 21 and also receives operation data from the right controller 4 via the right terminal 21. Further, when communicating with the cradle, the processor 81 transmits data to the cradle via the lower terminal 27. As described above, in the exemplary embodiment, the main body apparatus 2 can perform both wired communication and wireless communication with each of the left controller 3 and the right controller 4. Further, when the unified apparatus obtained by attaching the left controller 3 and the right controller 4 to the main body apparatus 2 or the main body apparatus 2 alone is attached to the cradle, the main body apparatus 2 can output data (e.g., image data or sound data) to the stationary monitor or the like via the cradle.
Here, the main body apparatus 2 can communicate with a plurality of left controllers 3 simultaneously (in other words, in parallel). Further, the main body apparatus 2 can communicate with a plurality of right controllers 4 simultaneously (in other words, in parallel). Thus, a plurality of users can simultaneously provide inputs to the main body apparatus 2, each using a set of the left controller 3 and the right controller 4. As an example, a first user can provide an input to the main body apparatus 2 using a first set of the left controller 3 and the right controller 4, and simultaneously, a second user can provide an input to the main body apparatus 2 using a second set of the left controller 3 and the right controller 4.
The main body apparatus 2 includes a touch panel controller 86, which is a circuit for controlling the touch panel 13. The touch panel controller 86 is connected between the touch panel 13 and the processor 81. On the basis of a signal from the touch panel 13, the touch panel controller 86 generates data indicating the position at which a touch input has been performed, for example, and outputs the data to the processor 81.
Further, the display 12 is connected to the processor 81. The processor 81 displays a generated image (e.g., an image generated by executing the above information processing) and/or an externally acquired image on the display 12.
The main body apparatus 2 includes a codec circuit 87 and speakers (specifically, a left speaker and a right speaker) 88. The codec circuit 87 is connected to the speakers 88 and a sound input/output terminal 25 and also connected to the processor 81. The codec circuit 87 is a circuit for controlling the input and output of sound data to and from the speakers 88 and the sound input/output terminal 25.
The main body apparatus 2 includes a power control section 97 and a battery 98. The power control section 97 is connected to the battery 98 and the processor 81. Further, although not shown in
Further, the battery 98 is connected to the lower terminal 27. When an external charging device (e.g., the cradle) is connected to the lower terminal 27 and power is supplied to the main body apparatus 2 via the lower terminal 27, the battery 98 is charged with the supplied power.
The left controller 3 includes a communication control section 101, which communicates with the main body apparatus 2. As shown in
Further, the left controller 3 includes a memory 102 such as a flash memory. The communication control section 101 includes, for example, a microcomputer (or a microprocessor) and executes firmware stored in the memory 102, thereby performing various processes.
The left controller 3 includes buttons 103 (specifically, the buttons 33 to 39, 43, 44, and 47). Further, the left controller 3 includes the left stick 32. Each of the buttons 103 and the left stick 32 outputs information regarding an operation performed on itself to the communication control section 101 repeatedly at appropriate timings.
The left controller 3 includes inertial sensors. Specifically, the left controller 3 includes an acceleration sensor 104. Further, the left controller 3 includes an angular velocity sensor 105. In the exemplary embodiment, the acceleration sensor 104 detects the magnitudes of accelerations along three predetermined axial directions (e.g., x, y, z axes shown in
The communication control section 101 acquires information regarding an input (specifically, information regarding an operation or the detection result of the sensor) from each of input sections (specifically, the buttons 103, the left stick 32, and the sensors 104 and 105). The communication control section 101 transmits operation data including the acquired information (or information obtained by performing predetermined processing on the acquired information) to the main body apparatus 2. The operation data is transmitted repeatedly, once every predetermined time. The interval at which the information regarding an input is transmitted from each of the input sections to the main body apparatus 2 may or may not be the same.
The above operation data is transmitted to the main body apparatus 2, whereby the main body apparatus 2 can obtain inputs provided to the left controller 3. That is, the main body apparatus 2 can determine operations on the buttons 103 and the left stick 32 on the basis of the operation data. Further, the main body apparatus 2 can calculate information regarding the motion and/or the orientation of the left controller 3 on the basis of the operation data (specifically, the detection results of the acceleration sensor 104 and the angular velocity sensor 105).
The left controller 3 includes a power supply section 108. In the exemplary embodiment, the power supply section 108 includes a battery and a power control circuit. Although not shown in
As shown in
The right controller 4 includes input sections similar to the input sections of the left controller 3. Specifically, the right controller 4 includes buttons 113, the right stick 52, and inertial sensors (an acceleration sensor 114 and an angular velocity sensor 115). These input sections have functions similar to those of the input sections of the left controller 3 and operate similarly to the input sections of the left controller 3.
The right controller 4 includes a power supply section 118. The power supply section 118 has a function similar to that of the power supply section 108 of the left controller 3 and operates similarly to the power supply section 108.
Next, an outline of operation of the information processing executed by the game system 1 according to the exemplary embodiment will be described. In the exemplary embodiment, as an example of the information processing, a description will be given assuming processing of a game that is played by a user operating a player character object (hereinafter referred to as player character) that exists in a virtual three-dimensional space (hereinafter referred to simply as virtual space). More specifically, the processing according to the exemplary embodiment is mainly related to control of a virtual camera in the virtual space.
If the player character is not moved and only the imaging direction of the virtual camera (hereinafter referred to as direction of the virtual camera) is changed, the virtual camera moves around the player character while changing the direction of the virtual camera.
In the exemplary embodiment, the default position of the virtual camera is a position behind the player character, but in some cases, the virtual camera may be moved so as to follow the player character while being located at a position that is not behind the player character. For example, while the direction of the virtual camera is maintained as a direction to the right side of the player character, the virtual camera may be moved as the player character moves. In this case, the situation may be a situation in which, in a state where the left side of the player character is shown, the virtual camera is moving in parallel with the player character.
In the game according to the exemplary embodiment, in the game screen shown in
Next, the above map screen will be described.
The map image used on the map screen may be an image obtained by taking an image of the virtual space by the virtual camera from above, or may be an image obtained by taking an image of the virtual space from above through perspective projection. In addition, an abstracted image of the virtual space as viewed from above (e.g., an illustrated image) separately prepared for the map screen may be used. In the exemplary embodiment, the abstracted image of the virtual space as viewed from above separately prepared for the map screen is used.
Next, operations that can be performed by the user in the map screen will be described. First, in the exemplary embodiment, the user can move the cursor on the map image by operating the left stick 32 in the above map screen. For example, in
By performing a predetermined operation, the user can end the display of the map screen and switch the screen to the normal screen. In the exemplary embodiment, as the operation for ending the display of the map screen, multiple operation methods can be used. Specifically, switching from the map screen to the normal screen can be performed by an operation of pressing the A-button 53 (hereinafter referred to as automatic movement instruction operation), an operation of pressing the ZL-button 39 (hereinafter referred to as automatic rotation instruction operation), or an operation of pressing the B-button 54 (hereinafter referred to as simple end operation). Hereinafter, each of these operations will be described.
First, the automatic movement instruction operation using the A-button 53 will be described. If the user presses the A-button 53 in the map screen, the screen is switched from the map screen to the normal screen, and “automatic movement control” can be started. The automatic movement control is control in which, after switching from the map screen to the normal screen, the player character is automatically moved toward a point (hereinafter referred to as cursor point), in the virtual space, corresponding to the position of the cursor on the map screen. More specifically, if the A-button 53 is pressed, a path from the current position of the player character in the virtual space to the cursor point when the A-button 53 is pressed is calculated. This path is calculated as a shortest path reaching the cursor point while avoiding obstacles. For example, using the positional relationship shown in
In the virtual space, there are various terrains, differences in height between them, and the like, so the cursor may indicate a point on the map screen that cannot be reached by the player character in the virtual space. If the A-button 53 is pressed in a state where the cursor is indicating such an unreachable point, a message indicating that the automatic movement control cannot be performed is displayed, and the map screen is not terminated.
Next, the action made if the automatic rotation instruction operation is performed by pressing the ZL-button 39 in the map screen will be described. In the exemplary embodiment, if the automatic rotation instruction operation is performed, after returning to the normal screen, control in which the direction of the virtual camera is changed over a predetermined time (e.g., 1 to 2 seconds) such that the virtual camera faces the cursor point (hereinafter referred to as automatic rotation control) is performed. That is, after switching to the normal screen, the above automatic movement control is not performed, but control in which only the direction of the virtual camera is turned toward the cursor point is performed. In other words, the user is informed of which direction the cursor point is located in, and whether or not to move is left to the user's subsequent decision.
Also, supplementary description will be given regarding the above change of the direction of the virtual camera. In the exemplary embodiment, where the direction of the virtual camera is represented by three axis components with the vertical direction of the virtual space as a y-axis, the depth direction (imaging direction) as a z-axis, and an axis orthogonal to these two axes as an x-axis, the direction of the virtual camera is changed by rotating the virtual camera only around the y-axis. In other words, the yaw component (the rotation component around the y-axis) of the virtual camera is changed to an angle at which the virtual camera faces the direction to the cursor point, and the pitch component and the roll component of the virtual camera are not changed. In addition, since the gazing point is the player character as described above, the position of the virtual camera changes with the change of the direction of the virtual camera (see
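As a rough illustration only (the function and parameter names below are hypothetical and not part of the embodiment), changing only the yaw while keeping the player character as the gazing point can be sketched as follows; the camera position is recomputed on a circle around the player, so the pitch and roll remain as they are:

```python
import math

def yaw_only_camera_update(player_pos, distance, height, yaw_deg):
    """Place the virtual camera on a circle around the player character
    (the gazing point), using only the yaw angle; the pitch implied by
    'height' and 'distance' and the roll are left unchanged."""
    px, py, pz = player_pos
    yaw = math.radians(yaw_deg)
    # The camera sits 'distance' behind the viewing direction given by the yaw.
    cam_x = px - distance * math.sin(yaw)
    cam_z = pz - distance * math.cos(yaw)
    cam_y = py + height
    # The imaging direction points from the camera toward the player.
    view_dir = (px - cam_x, py - cam_y, pz - cam_z)
    return (cam_x, cam_y, cam_z), view_dir

# Changing only the yaw from 0 to 90 degrees moves the camera around the player
# while the player stays at the gazing point.
pos_a, dir_a = yaw_only_camera_update((0.0, 0.0, 0.0), distance=10.0, height=3.0, yaw_deg=0.0)
pos_b, dir_b = yaw_only_camera_update((0.0, 0.0, 0.0), distance=10.0, height=3.0, yaw_deg=90.0)
```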
Here, in the exemplary embodiment, if the map screen is terminated by the above automatic rotation instruction operation and the screen is switched to the normal screen, the virtual camera is not instantly turned toward the direction to the cursor point, but is turned toward that direction over a certain time as described above. This is for the following reason. Suppose that the virtual camera were already facing the cursor point at the moment the map screen is terminated and the screen is switched to the normal screen. In this case, it may be difficult for the user to recognize from which direction and to which direction the direction of the virtual camera has changed before and after the map screen is displayed. In addition, it may also be difficult for the user to know which direction the virtual camera was facing before the map screen was displayed. As a result, the user's sense of direction in the normal screen, on which the virtual space is displayed as a three-dimensional image, may be impaired, leading to a situation where the user gets lost in the virtual space. In particular, unlike at a location where a landmark that can serve as a guide is visible in the virtual space, at a location where there is no such landmark and which is surrounded by similar scenery, it may be difficult for the user to keep the sense of direction in the normal screen. Therefore, in the exemplary embodiment, when the screen is switched from the map screen to the normal screen, the direction and the position of the virtual camera are controlled so as to change over a short time from the direction in which the virtual camera faced before shifting to the map screen to the direction to the cursor point. Accordingly, when the screen is switched from the map screen to the normal screen, the user can be informed of the direction to the cursor point without the user's sense of direction being impaired.
In the exemplary embodiment, when the screen is switched from the map screen to the normal screen by the above automatic rotation instruction operation, the direction of the player character is controlled such that the player character faces the cursor point. By also changing the direction of the player character as described above, the change in direction can be more clearly conveyed. In addition, an operation for changing the direction of the player character by the user themselves can be omitted, so that the convenience of the user can be improved. However, in another exemplary embodiment, when the screen is switched to the normal screen by the automatic rotation instruction operation, the direction of the player character may not necessarily be changed. In this case, for example, the screen can change from a screen on which the back of the player character is shown before entering the map screen, to a screen on which the side or front of the player character is shown. This case is advantageous in that it is easier to inform the user of the difference in direction before and after the map screen is displayed. Even in this case, since the direction of the virtual camera is changed over a certain time, the difference between the direction of the player character and the direction of the virtual camera can be visually conveyed, making it easier for the user to grasp the positional relationship in the virtual space.
Although not shown, operation guides for the above two operations, the automatic movement instruction operation and the automatic rotation instruction operation, may be displayed on the map screen. In this case, the operation guide for the automatic movement instruction operation may not necessarily be displayed while the cursor is indicating a point for which the automatic movement control cannot be performed.
Next, the simple end operation for terminating the map screen by pressing the B-button 54 will be described. In this case, regardless of the position of the cursor on the map screen, the normal screen is displayed without changing the position and the direction of the virtual camera from the state before the map screen is displayed. In other words, the simple end operation is an operation for merely closing the map screen.
Next, the game processing in the exemplary embodiment will be described in more detail with reference to
First, various kinds of data to be used in the processing will be described.
The game program 301 is a program for executing game processing including the virtual camera control processing in the exemplary embodiment.
The player character data 302 is data regarding the above player character. The player character data 302 includes position data indicating the position of the player character, data indicating the orientation of the player character, etc.
The terrain object data 303 is data of various terrain objects to be placed in the virtual space. The terrain object data 303 includes an ID for uniquely identifying each terrain object, information indicating the placement position of the terrain object, model data indicating the shape and the appearance of the terrain object, image data, etc.
The virtual camera data 304 is data for controlling the virtual camera. The virtual camera data 304 includes data indicating the position, orientation (depression angle), angle of view, movement speed, etc., of the virtual camera.
The operation data 305 is data obtained from the controller operated by the user. That is, the operation data 305 is data indicating the content of an operation performed by the user.
Referring back to
The automatic movement flag 307 is a flag used to determine whether or not to execute the above automatic movement control. If the automatic movement flag 307 is ON, it indicates that the automatic movement control is to be performed.
The automatic movement control data 308 is various kinds of data used when the automatic movement control is performed. Specifically, the automatic movement control data 308 includes path information indicating a path to the above cursor point, etc.
The automatic rotation flag 309 is a flag used to determine whether or not to execute the above automatic rotation control. If the automatic rotation flag 309 is ON, it indicates that the automatic rotation control is to be performed.
The automatic rotation control data 310 is data used when the automatic rotation control is performed. The automatic rotation control data 310 includes data indicating a “target direction” which is the direction to the cursor point as viewed from the position of the player character, data indicating the rotation speed per unit time, the movement direction, and the movement speed of the virtual camera during the automatic rotation control, etc.
In addition, various kinds of data required for the game processing, which are not shown, are also stored in the DRAM 85.
Next, the details of the virtual camera control processing in the exemplary embodiment will be described. In the exemplary embodiment, the processing in the flowcharts described below is realized by one or more processors reading and executing the above program stored in one or more memories. The flowcharts described below are merely an example of the processing. Therefore, the order of each process step may be changed as long as the same result is obtained. In addition, the values of variables and thresholds used in determination steps are also merely examples, and other values may be used as necessary.
When the game processing starts, first, in step S1, the processor 81 initializes various kinds of data, and takes an image of the virtual space from the position of the virtual camera determined on the basis of the position of the player character. Furthermore, the processor 81 displays the taken image as the above normal screen. Then, the processor 81 waits for an operation from the user.
Next, in step S2, the processor 81 determines whether or not the map flag 306 is ON. As a result of the determination, if the map flag 306 is not ON (NO in step S2), in step S3, the processor 81 performs normal screen processing. On the other hand, if the map flag 306 is ON (YES in step S2), in step S4, the processor 81 performs map screen processing. The details of each processing will be described below.
Next, in step S13, the processor 81 determines whether or not an operation for shifting to the map screen has been performed, on the basis of the operation data 305. As a result of the determination, if the operation for shifting to the map screen has not been performed (NO in step S13), next, in step S14, the processor 81 determines whether or not the automatic movement flag 307 is ON. As a result of the determination, if the automatic movement flag 307 is OFF (NO in step S14), next, in step S15, the processor 81 determines whether or not an operation for moving the player character has been performed, on the basis of the operation data 305. If the operation for moving the player character has been performed (YES in step S15), in step S16, the processor 81 controls the movement of the player character on the basis of the operation data 305. On the other hand, if the operation for moving the player character has not been performed (NO in step S15), the process in step S16 above is skipped.
Next, in step S17 in
Next, in step S18, the processor 81 controls actions of various objects other than the player character.
Next, in step S19, the processor 81 takes an image of the virtual space by the virtual camera. Furthermore, the processor 81 generates a game image to be displayed as the above normal screen, on the basis of the taken image. Then, the processor 81 outputs the game image to the display 12 or a stationary monitor or the like. Thereafter, the processor 81 ends the normal screen processing.
Next, processing performed if, as a result of the determination in step S11 above, the automatic rotation flag 309 is ON (YES in step S11), will be described. In this case, the current state is considered to be a state after the display of the map screen is ended by the automatic rotation instruction operation. Therefore, in step S21, the processor 81 executes automatic rotation control processing for performing the automatic rotation control as described above.
Next, in step S32, the processor 81 determines whether or not the front side of the player character is facing the above target direction. As a result of the determination, if the front side of the player character is not facing the target direction (NO in step S32), in step S33, the processor 81 changes the orientation of the player character such that the player character faces the target direction. On the other hand, if the front side of the player character is facing the target direction (YES in step S32), the process in step S33 above is skipped.
Next, in step S34, the processor 81 determines whether or not the direction of the virtual camera has become the above target direction. As a result of the determination, if the direction of the virtual camera has become the target direction (YES in step S34), in step S35, the processor 81 sets the automatic rotation flag 309 to be OFF. On the other hand, if the direction of the virtual camera has not become the target direction (NO in step S34), the process in step S35 above is skipped.
Then, the processor 81 ends the automatic rotation control processing.
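A minimal per-frame sketch of this automatic rotation control processing is shown below; the names are hypothetical, a yaw-angle representation is assumed, and the step that applies the per-frame rotation amount (set in step S49) is assumed to come before the checks corresponding to steps S32 to S35:

```python
def automatic_rotation_step(state):
    """One frame of the automatic rotation control (sketch of steps S32 to S35).

    'state' is a hypothetical dict holding:
      camera_yaw, target_yaw : yaw angles in degrees
      yaw_step               : magnitude of change per frame (set in step S49)
      player_yaw             : orientation of the player character
      auto_rotation_flag     : corresponds to the automatic rotation flag 309
    """
    # Rotate the virtual camera toward the target direction by the per-frame amount.
    remaining = (state["target_yaw"] - state["camera_yaw"] + 180.0) % 360.0 - 180.0
    if abs(remaining) <= state["yaw_step"]:
        state["camera_yaw"] = state["target_yaw"]
    else:
        state["camera_yaw"] += state["yaw_step"] if remaining > 0 else -state["yaw_step"]

    # Steps S32/S33: turn the player character so that its front faces the target direction.
    if state["player_yaw"] != state["target_yaw"]:
        state["player_yaw"] = state["target_yaw"]

    # Steps S34/S35: once the camera direction has reached the target direction,
    # set the automatic rotation flag to OFF.
    if state["camera_yaw"] == state["target_yaw"]:
        state["auto_rotation_flag"] = False
```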
Referring back to
Next, processing performed if, as a result of the determination in step S13 above, the operation for shifting to the map screen has been performed (YES in step S13), will be described. In this case, in step S22, the processor 81 sets the map flag 306 to be ON. Next, in step S23, the processor 81 generates and displays a map screen as described above, on the basis of the current position of the player character. For example, the processor 81 determines the display range of the map image on the map screen such that the display position of the player icon is at substantially the center of the map screen. Then, the processor 81 displays the map image in the determined range. Then, the processor 81 ends the normal screen processing.
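As one possible sketch of how the display range could be determined so that the player icon appears at substantially the center (assuming, purely for illustration, a linear mapping from virtual-space coordinates to map-image pixels):

```python
def map_display_range(player_pos_xz, map_scale, view_width, view_height):
    """Return the rectangle (left, top, right, bottom) of the map image to display
    so that the player icon appears at roughly the center of the map screen.

    player_pos_xz     : (x, z) position of the player character in the virtual space
    map_scale         : map-image pixels per virtual-space unit (assumed linear mapping)
    view_width/height : size, in map-image pixels, of the area shown on the map screen
    """
    px, pz = player_pos_xz
    # Convert the player position to map-image coordinates.
    map_x = px * map_scale
    map_y = pz * map_scale
    # Center the displayed range on the player icon.
    left = map_x - view_width / 2.0
    top = map_y - view_height / 2.0
    return (left, top, left + view_width, top + view_height)
```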
Next, processing performed if, as a result of the determination in step S14 above, the automatic movement flag 307 is ON (YES in step S14), will be described. In this case, the current state is considered to be a state after the display of the map screen is ended by the automatic movement instruction operation. In this case, first, in step S24, the processor 81 moves the player character on the basis of the automatic movement control data 308. Furthermore, the processor 81 controls the virtual camera such that the virtual camera moves so as to follow the player character from behind.
Next, in step S25, the processor 81 determines whether or not the automatic movement cancellation operation has been performed, on the basis of the operation data 305. As a result of the determination, if the automatic movement cancellation operation has been performed (YES in step S25), in step S26, the processor 81 sets the automatic movement flag 307 to be OFF. Furthermore, the processor 81 stops the movement of the player character. Then, the processor 81 advances the processing to step S18 above. On the other hand, if the automatic movement cancellation operation has not been performed (NO in step S25), the process in step S26 above is skipped. Then, the processor 81 advances the processing to step S18 above.
In the exemplary embodiment, the example in which no operation other than the automatic movement cancellation operation is accepted during the automatic movement control has been described. In this regard, in another exemplary embodiment, actions other than movement may be enabled even during the automatic movement control.
This is the end of the detailed description of the normal screen processing.
Next, the details of the map screen processing in step S4 above will be described.
On the other hand, as a result of the determination in step S42 above, if the operation for moving the cursor has not been performed (NO in step S42), in step S44, the processor 81 determines whether or not an operation for scrolling the map screen has been performed, on the basis of the operation data 305. If the scrolling operation has been performed (YES in step S44), in step S45, the processor 81 scrolls the map screen according to the content of the operation. Accordingly, the user is allowed to designate, for example, a point that is not displayed on the normal screen, or a point that is far away from the current position of the player character, with the cursor. Then, the processor 81 advances the processing to step S58 described later.
On the other hand, as a result of the determination in step S44 above, if the scrolling operation has not been performed (NO in step S44), in step S46, the processor 81 determines whether or not the automatic rotation instruction operation has been performed, on the basis of the operation data 305. In this example, whether or not the ZL-button 39 has been pressed is determined. As a result of the determination, if the automatic rotation instruction operation has been performed (YES in step S46), in step S47, the processor 81 sets the automatic rotation flag 309 to be ON.
Next, in step S48, the processor 81 determines the above target direction on the basis of the coordinates of the cursor on the map screen. Specifically, the processor 81 calculates the position, in the virtual space, corresponding to the coordinates of the cursor on the map screen, as the above cursor point. Next, the processor 81 determines the above target direction on the basis of the current position of the player character in the virtual space and the above cursor point.
Next, in step S49, the processor 81 sets the automatic rotation control data 310 on the basis of the above target direction. Specifically, the processor 81 calculates an amount of change in direction per unit time such that the direction of the virtual camera changes from the current direction to the target direction over a predetermined time. In addition, as described above, as the direction of the virtual camera changes, the processor 81 changes the position of the virtual camera, so that the processor 81 also calculates an amount of movement per unit time for the virtual camera. Then, the processor 81 sets data indicating the calculation results, as the automatic rotation control data 310. Then, the processor 81 advances the processing to step S55 described later.
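Steps S48 and S49 could be sketched roughly as follows (hypothetical names; a linear map-to-world mapping, a fixed frame rate, and a yaw-angle representation of the camera direction are assumed):

```python
import math

def set_automatic_rotation_data(cursor_map_xy, map_scale, player_pos, camera_yaw_deg,
                                duration_sec=1.5, fps=60):
    """Sketch of steps S48/S49: derive the cursor point and the target direction
    from the cursor coordinates, then compute the amount of change per frame so
    that the camera direction reaches the target direction over 'duration_sec'."""
    # Step S48: cursor coordinates on the map screen -> cursor point in the virtual space
    # (assuming the inverse of the mapping used to draw the map image).
    cursor_x = cursor_map_xy[0] / map_scale
    cursor_z = cursor_map_xy[1] / map_scale

    # Target direction: from the player position toward the cursor point (yaw only).
    dx = cursor_x - player_pos[0]
    dz = cursor_z - player_pos[2]
    target_yaw_deg = math.degrees(math.atan2(dx, dz))

    # Step S49: magnitude of change per frame so that the rotation finishes in duration_sec.
    diff = (target_yaw_deg - camera_yaw_deg + 180.0) % 360.0 - 180.0
    yaw_step_per_frame = abs(diff) / (duration_sec * fps)

    # These values would correspond to the automatic rotation control data 310;
    # the per-frame movement of the camera position follows from the yaw, since
    # the player character remains the gazing point.
    return {"target_yaw": target_yaw_deg, "yaw_step": yaw_step_per_frame}
```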
On the other hand, as a result of the determination in step S46 above, if the automatic rotation instruction operation has not been performed (NO in step S46), in step S50 in
On the other hand, if the cursor point is at a location to which the player character can move (YES in step S51), in step S53, the processor 81 sets the automatic movement control data 308 on the basis of the coordinates of the cursor on the map screen. Specifically, the processor 81 calculates the above cursor point on the basis of the coordinates of the cursor. Next, the processor 81 calculates the path from the current position of the player character to reach the cursor point. Then, the processor 81 sets information indicating this path, as the automatic movement control data 308.
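The embodiment does not detail how the shortest path avoiding obstacles is computed; as one hedged illustration, a breadth-first search over a hypothetical walkability grid gives a shortest path in grid steps and also reports when the cursor point cannot be reached (the case handled in steps S51/S52):

```python
from collections import deque

def find_path(walkable, start, goal):
    """Breadth-first search on a 2D walkability grid (True = passable).
    Returns the list of (x, y) cells from 'start' to 'goal', or None if the
    goal cannot be reached."""
    if not walkable[goal[1]][goal[0]]:
        return None
    prev = {start: None}
    queue = deque([start])
    while queue:
        cur = queue.popleft()
        if cur == goal:
            # Reconstruct the path by walking back through the predecessors.
            path = []
            while cur is not None:
                path.append(cur)
                cur = prev[cur]
            return path[::-1]
        x, y = cur
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if (0 <= ny < len(walkable) and 0 <= nx < len(walkable[0])
                    and walkable[ny][nx] and (nx, ny) not in prev):
                prev[(nx, ny)] = cur
                queue.append((nx, ny))
    return None
```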
Next, in step S54, the processor 81 sets the automatic movement flag 307 to be ON.
Next, in step S55, the processor 81 terminates the map screen and displays the normal screen. The normal screen displayed at this time is based on an image taken at the position and the direction of the virtual camera that are immediately before the map screen is displayed.
Next, in step S56, the processor 81 sets the map flag 306 to be OFF. Then, the processor 81 ends the map screen processing.
On the other hand, as a result of the determination in step S50 above, if the automatic movement instruction operation has not been performed (NO in step S50), in step S57, the processor 81 determines whether or not the above simple end operation has been performed, on the basis of the operation data 305. In this example, whether or not the B-button 54 has been pressed is determined. As a result of the determination, if the simple end operation has been performed (YES in step S57), the processor 81 advances the processing to step S55 above. As a result, the map screen is terminated. On the other hand, if the simple end operation has not been performed (NO in step S57), in step S58, the processor 81 displays the map screen. At this time, if a message indicating that the automatic movement control cannot be performed has been set to be displayed in step S52 above, this message is also displayed on the map screen. In addition, if the cursor has been moved or a screen scrolling operation has been performed, the map screen reflecting the content of this operation is displayed. Then, the processor 81 ends the map screen processing.
Referring back to
This is the end of the detailed description of the game processing according to the exemplary embodiment.
As described above, in the exemplary embodiment, in the game in which it is possible to switch between the normal screen and the map screen, when the screen returns from the map screen to the normal screen, control in which the direction of the virtual camera is turned toward a point designated on the map screen is performed. In addition, the direction is not changed instantly but is changed over a certain time, and this change is shown to the user. This allows the user to grasp how the imaging direction has changed.
In the case of performing the game processing using touch operations on a hand-held information processing apparatus such as a smartphone, for example, operations such as changing the imaging direction of the virtual camera with a touch operation and returning the imaging direction to the original direction with a touch-off operation may be employed. Assuming such operations, it may be difficult to grasp which direction the virtual camera is facing when the direction of the virtual camera is changed, since the display screen is relatively small in size and the screen may be hidden by fingers during touch operations, for example. In this regard, the processing of the exemplary embodiment makes it easier to grasp how the direction of the virtual camera has been changed, so that the processing as in the exemplary embodiment is also useful for game processing using touch operations such as those described above, etc.
In addition, by moving the cursor on the map screen displaying the virtual space as viewed from above, the direction in which the virtual camera faces can be designated, so that the direction in which it is desired to cause the virtual camera to face can be designated accurately and easily.
In the above embodiment, the frustum image is displayed on the map screen. The frustum image is controlled so as to face the position of the cursor on the map screen, and it can therefore be said that the frustum image indicates the direction that the virtual camera will face as a result of the automatic rotation control (the above target direction). In this regard, in another exemplary embodiment, the frustum image may be directly rotatable around a user icon, for example, by operating the right stick 52. For example, by making the frustum image operable on the map screen on the basis of an operation performed by the user, without using a cursor as described above, it is made possible for the user to directly designate the above target direction.
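A small sketch of this variation (hypothetical names; the right stick input is assumed to be given as x/y components in the range -1 to 1): the stick tilt directly determines the angle of the frustum image, which can then be used as the target direction for the automatic rotation control:

```python
import math

def update_frustum_angle(current_angle_deg, stick_x, stick_y, dead_zone=0.2):
    """Rotate the frustum image on the map screen directly from the right stick.
    The returned angle can serve as the target direction instead of a cursor point."""
    if math.hypot(stick_x, stick_y) < dead_zone:
        return current_angle_deg  # ignore inputs inside the dead zone
    # Point the frustum image in the direction the stick is tilted
    # (0 degrees = "up" on the map, increasing clockwise).
    return math.degrees(math.atan2(stick_x, stick_y))
```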
In addition, the image indicating the direction of the virtual camera on the map screen is not limited to the fan-shaped frustum image as described above, and, for example, an arrow image or the like may be used.
In the above embodiment, the game processing in which the above normal screen is a third-person viewpoint screen has been exemplified, but the above-described processing can also be applied to information processing in which the above normal screen is a first-person viewpoint screen. In the case of a first-person viewpoint screen, as the above automatic rotation control, control in which the position of the virtual camera is not changed and only the angle of the yaw component is changed, may be performed.
In the above example, the direction of the virtual camera on the normal screen is substantially horizontal to the ground. The present disclosure is not limited thereto, and in another exemplary embodiment, the direction of the virtual camera on the normal screen may be any direction as long as the direction of the virtual camera is a direction other than the vertical direction (directly downward direction).
In the case of a game in which the direction of the virtual camera can be the vertically downward direction on the normal screen, the following processing may be performed. For example, game processing in which the position and the orientation of the virtual camera can be controlled on the normal screen by an operation performed by the user, is assumed. In such a game, by moving the virtual camera upward in the virtual space while the player character is set as the gazing point, the direction of the virtual camera is changed to the vertically downward direction, and as a result, the normal screen may be a screen displaying the virtual space as viewed from above as shown in
In the above embodiment, the case where the series of processes related to the game processing is performed in the single main body apparatus 2 has been described. However, in another embodiment, the above series of processes may be performed in an information processing system that includes a plurality of information processing apparatuses. For example, in an information processing system that includes a terminal side apparatus and a server side apparatus capable of communicating with the terminal side apparatus via a network, a part of the series of processes may be performed by the server side apparatus. Alternatively, in an information processing system that includes a terminal side apparatus and a server side apparatus capable of communicating with the terminal side apparatus via a network, a main process of the series of the processes may be performed by the server side apparatus, and a part of the series of the processes may be performed by the terminal side apparatus. Still alternatively, in the information processing system, a server side system may include a plurality of information processing apparatuses, and a process to be performed in the server side system may be divided and performed by the plurality of information processing apparatuses. In addition, a so-called cloud gaming configuration may be adopted. For example, the main body apparatus 2 may be configured to send operation data indicating a user's operation to a predetermined server, and the server may be configured to execute various kinds of game processing and stream the execution results as video/audio to the main body apparatus 2.
While the present disclosure has been described in detail, the foregoing description is in all aspects illustrative and not restrictive. It is to be understood that numerous other modifications and variations can be devised without departing from the scope of the present disclosure.