A technique is known in which a computer performs a calculation process to construct a virtual space (for example, a metaverse or a game space), dispose a character (for example, an avatar or a player character) of a user who is also a player, and provide the user with a virtual experience in the virtual space. For example, Japanese Unexamined Patent Application Publication No. 2001-312744 discloses a technique allowing users sharing one virtual space to communicate with each other.
When there are a plurality of virtual spaces, there is a need to reproduce a situation of a second virtual space in a first virtual space in order to provide further enriched virtual experiences in the virtual spaces. For example, when a situation of gameplay of a fighting battle game can be reproduced in another virtual space different from the virtual space where the fighting battle game is played, a user participating in the other virtual space is able to view the situation of the gameplay, resulting in a further enriched virtual experience.
One possible method for reproducing a situation in another virtual space is to execute the processing performed in that virtual space (for example, motion control for character models or game progress control) in the virtual space serving as the destination of reproduction. This method, however, places an extremely high process load on the computer system controlling the virtual space serving as the destination of reproduction. It is therefore difficult to satisfy such needs simply by reproducing the situation of the second virtual space in the first virtual space. A new embodiment method is thus demanded that makes it appear as if the second virtual space exists in the first virtual space.
The following disclosure provides many different embodiments, or examples, for implementing different features of the provided subject matter. These are, of course, merely examples and are not intended to be limiting. In addition, the disclosure may repeat reference numerals and/or letters in the various examples. This repetition is for the purpose of simplicity and clarity and does not in itself dictate a relationship between the various embodiments and/or configurations discussed. Further, when a first element is described as being “connected” or “coupled” to a second element, such description includes embodiments in which the first and second elements are directly connected or coupled to each other, and also includes embodiments in which the first and second elements are indirectly connected or coupled to each other with one or more other intervening elements in between.
In accordance with one of some embodiments, there is provided a computer system comprising at least one processor or circuit programmed to execute:
The “computer system” used herein may be implemented by a single computer or, of course, by a plurality of computers operating in a cooperative manner.
According to the disclosure, in some embodiments, a computer system sets a virtual space in a first virtual space, and performs control for expressing an embodiment space embodying a second virtual space in the virtual space. Since the second virtual space is embodied within a virtual space smaller than the first virtual space, the embodiment-related process load can be suppressed. It is therefore possible to provide a new embodiment method that makes it appear as if the second virtual space exists in the first virtual space.
A second disclosure is the computer system, wherein performing the space expression control includes disposing, in the virtual space, an object corresponding to the object in the second virtual space in a disposition configuration based on a disposition configuration in the second virtual space to perform control for expressing the embodiment space.
According to the disclosure, in some embodiments, in order to embody an object (an embodiment target object) in the second virtual space, the computer system is able to dispose, in the virtual space, an object corresponding to that object in a disposition configuration based on the disposition configuration in the second virtual space.
A third disclosure is the computer system, wherein performing the space expression control includes expressing the second virtual space with a calculation amount smaller than that required to completely reproduce the second virtual space.
According to the disclosure, in some embodiments, the computer system is able to express the second virtual space with a calculation amount smaller than that required to completely reproduce the second virtual space.
A fourth disclosure is the computer system, wherein performing the space expression control includes performing control for expressing the embodiment space based on a captured image in which the second virtual space is imaged from a given imaging point of view.
A fifth disclosure is the computer system, wherein performing the space expression control includes:
According to the disclosure, in some embodiments, the computer system is able to perform mapping of a captured image in which the second virtual space is imaged from a given imaging point of view onto the virtual object disposed in the first virtual space to express the embodiment space embodying the second virtual space.
A sixth disclosure is the computer system, wherein performing the space expression control includes performing virtual object control for controlling a position and/or an orientation of the virtual object in accordance with a position and/or an orientation of the user's point of view in the first virtual space.
According to the disclosure, in some embodiments, the computer system is able to control the position and the orientation of the virtual object disposed in the first virtual space in accordance with the position and the orientation of the user's point of view in the first virtual space. For example, it is possible to perform control for causing the virtual object to face the user's point of view, and it is possible to omit control for a virtual object positioned outside the field of view of the user's point of view.
A seventh disclosure is the computer system, wherein performing the virtual object control includes performing control for disposing the virtual object in a posture that takes a predetermined relative orientation with respect to the user's point of view so as to follow a change in position and/or a change in orientation of the user's point of view.
According to the disclosure, in some embodiments, the computer system is able to perform, for example, disposition control for causing a predetermined surface (for example, a mapping surface that undergoes texture mapping) of the virtual object to continuously face the user's point of view. A virtual object may be a plate-shaped primitive surface.
An eighth disclosure is the computer system, wherein
According to the disclosure, in some embodiments, the computer system is able to express an embodiment space based on a captured image captured from one of a plurality of imaging points of view.
A ninth disclosure is the computer system, wherein the at least one processor or circuit is further programmed to execute imaging point-of-view control for controlling a position and/or an orientation of the imaging point of view in the second virtual space in accordance with a position and/or an orientation of the user's point of view in the first virtual space.
A tenth disclosure is the computer system, wherein
According to the disclosure, in some embodiments, the computer system is able to cause the position and the orientation of the imaging point of view in the second virtual space to respond to a change in position or orientation of the user's point of view in the first virtual space.
Furthermore, according to the disclosure, in some embodiments, the captured image can be an image as if a participating user in the first virtual space were standing in and viewing the second virtual space. It is thus possible to allow the participating user in the first virtual space to view the second virtual space as if the second virtual space existed at a fixed position in the first virtual space.
An eleventh disclosure is the computer system, wherein performing the space expression control includes disposing, for each of the participating users participating in the first virtual space, the user's point of view corresponding to that participating user, performing the rendering processing, and performing control for expressing the embodiment space viewed from each of the user's points of view.
According to the disclosure, in some embodiments, the computer system is able to express an embodiment space viewed from the user's point of view of each participating user in the first virtual space.
A twelfth disclosure is the computer system, wherein
According to the disclosure, in some embodiments, the computer system is able to set a virtual space corresponding to each of a plurality of second virtual spaces in the first virtual space. When the first virtual space is a site of an exhibition, for example, a virtual space corresponds to each pavilion. A virtual experience that is possible to be provided in the first virtual space is thus further enriched.
A thirteenth disclosure is the computer system, wherein the imaging point of view and a participating user's point of view for each of the users participating in the second virtual space differ from each other.
According to the disclosure, in some embodiments, it is possible to separate, in the first virtual space, the imaging point of view for embodying the second virtual space and the point of view for the participating user participating in the second virtual space from each other.
A fourteenth disclosure is the computer system, wherein the second virtual space is a game space for which game progress is controlled based on an operation input of a user participating in the second virtual space.
According to the disclosure, in some embodiments, the computer system is able to express, in the first virtual space, a situation of a game played across the second virtual space.
A fifteenth disclosure is the computer system, wherein a computer for controlling the first virtual space and a computer for controlling the second virtual space are individually configured and provided.
According to the disclosure, in some embodiments, it is possible to perform processing related to the first virtual space and processing related to the second virtual space in respective computers in a dispersed manner.
A sixteenth disclosure is a virtual space control system comprising:
According to the disclosure, in some embodiments, it is possible to acquire operations and effects similar or identical to those according to each of the disclosures described above in a system including a server system and a user terminal serving as a man-machine interface.
In accordance with one of some embodiments, there is provided a virtual space control method executed by a computer system, the virtual space control method comprising:
According to the disclosure, in some embodiments, it is possible to achieve a virtual space control method that makes it possible to provide operations and effects similar or identical to those according to the first disclosure.
Exemplary embodiments are described below. Note that the following exemplary embodiments do not in any way limit the scope of the content defined by the claims laid out herein. Note also that all of the elements described in the present embodiment should not necessarily be taken as essential elements.
Hereinafter, examples of the embodiments of the present disclosure are described.
Note that modes to which the present disclosure is applicable are not limited to the following embodiments.
The virtual space control system 1000 is a system that simultaneously provides a plurality of users with virtual experiences in a virtual space. The virtual space control system 1000 is a computer system including an operation server system 1010 and user terminals 1500 (1500a, 1500b, . . . ), one for each of the users, which are coupled so as to make data communication possible via a network 9. The user terminal 1500 is a man-machine interface (MMIF).
The network 9 means a communication channel that makes data communication possible. That is, the network 9 includes, for example, a telecommunication network, a cable network, or the Internet, in addition to a private line (a private cable) for direct coupling or a local area network (LAN) based on Ethernet (registered trademark).
The operation server system 1010 is a computer system that a service provider or a system operator manages and operates, and includes a main server system 1100P and a plurality of sub-server systems 1100G (1100Ga, 1100Gb, . . . ). The main server system 1100P and the sub-server systems 1100G (1100Ga, 1100Gb, . . . ) are able to perform data communication via the network 9 with each other, and are each able to perform data communication via the network 9 with each of the user terminals 1500.
The main server system 1100P is a computer system controlling and managing a first virtual space, and is the server system that the user terminals 1500 first access in order to utilize various types of services related to the virtual space control system 1000.
The sub-server systems 1100G (1100Ga, 1100Gb, . . . ) each individually control and manage a second virtual space, communicate with one or a plurality of the user terminals 1500, and each function as a game server where the user terminals 1500 serve as game clients.
The main server system 1100P and the sub-server systems 1100G each have basic functions as computers.
That is, the main server system 1100P and the sub-server systems 1100G each include a main body device, a keyboard, and a touch panel, and a control board 1150 is mounted on the main body device. The control board 1150 is mounted with, for example, various types of microprocessors such as a central processing unit (CPU) 1151, a graphics processing unit (GPU), or a digital signal processor (DSP), various types of integrated circuit (IC) memories 1152 such as a video random access memory (VRAM), a random access memory (RAM), or a read-only memory (ROM), and a communication device 1153. Note that the control board 1150 may be implemented partially or entirely by an application specific integrated circuit (ASIC), a field-programmable gate array (FPGA), or a system on a chip (SoC). Through a calculation process performed by the control board 1150 based on a predetermined program and data, the main server system 1100P and the sub-server systems 1100G implement functions different from each other.
The main server system 1100P and the sub-server systems 1100G, which are each illustrated as one server device in
The user terminal 1500 is a computer system used by each of the users to utilize the virtual space control system 1000. The user terminal 1500 functions as a man-machine interface (MMIF) in the virtual space control system 1000.
The user terminal 1500, which is illustrated as a device called a smartphone in
The user terminal 1500 includes an operation input device (for example, a touch panel 1506, a keyboard, a game controller, or a mouse), an image display device (for example, the touch panel 1506, a head-mounted display, or a glasses type display), and a control board 1550.
The control board 1550 includes, for example, various types of microprocessors such as a CPU 1551, a GPU, or a DSP, various types of IC memories 1552 such as a VRAM, a RAM, or a ROM, and a communication module 1553 coupled to the network 9. These elements mounted on the control board 1550 are electrically coupled to each other via a bus circuit, for example, so that reading of data and transmission and reception of signals are possible. The control board 1550 may be partially or entirely implemented by an ASIC, an FPGA, or an SoC. The control board 1550 causes the IC memory 1552 to store programs and various types of data for implementing the functions of the user terminal. The user terminal 1500 then executes a predetermined application program to implement a function as the man-machine interface (MMIF) for the virtual space control system 1000.
The user terminal 1500 is configured to download the application program and various types of data necessary for executing the application program from the main server system 1100P and the sub-server systems 1100G, but may instead be configured to read the application program and the various types of data from a storage medium, such as a memory card, that each of the users acquires separately.
That is, the second virtual space 12 is a game space in which game progress is controlled based on an operation input performed by the user participating in the second virtual space 12 (hereinafter referred to as a “second participating user”). The sub-server system 1100G treats the user who has play-logged in as a player, disposes a player character 4 in the second virtual space 12, and controls motion of the player character 4 in accordance with an operation input detected in the user terminal 1500 of the user. Furthermore, a non-player character (NPC) is automatically controlled. That is, the sub-server system 1100G functions as a game server for a client-server type system, and provides the second participating user, who operates the user terminal 1500 serving as a game client, with a virtual experience in the online game.
In the example illustrated in
Note that a game genre of the online game that the sub-server system 1100G implements is not limited to a fighting battle game, and it is possible to set a desired game genre as appropriate. For example, the game genre may be a multiplayer online role playing game (MORPG), a racing game, a sports game such as soccer or baseball, or a strategy simulation game. Furthermore, the game is not limited to an online game, and may be a virtual live concert or a caring game, for example.
In the second virtual space 12, a participating user's point of view C2, which is an imaging point of view for the second participating user, is set, and an image of the second virtual space 12 (a virtual space image) captured from the participating user's point of view C2 is rendered. A game screen in which information display for gameplay is added to the virtual space image as appropriate is then generated. The game screen is, as illustrated in
It is possible to set, as appropriate, the participating user's point of view C2 in accordance with a game genre. Since it is a game screen for the fighting battle game in the example illustrated in
The main server system 1100P controls, in accordance with various types of operations that the first participating user has inputted to the user terminal 1500, movement and motion of the corresponding avatar 8.
A user's point of view C1 for the first participating user using the avatar 8 is set at a predetermined position on a head of the avatar 8. As the avatar 8 is moved, and a posture of the avatar 8 is changed, the user's point of view C1 is also moved in a linked manner, and its line-of-sight direction (the orientation of the field of view) is changed.
The main server system 1100P renders a situation of the first virtual space 11 imaged from the user's point of view C1 for each avatar 8 (for each first participating user) (rendering processing). An image illustrating the situation of the first virtual space 11 that the avatar 8 views, that is, a first virtual space screen W1, as illustrated in
When it is assumed that the first virtual space 11 corresponds to a metaverse, the first participating user is able to enjoy a pseudo-life type virtual experience resembling real life.
As one of features of the present embodiment, one or a plurality of virtual spaces 13 are prepared in the first virtual space 11, as illustrated in
It is possible to set, as appropriate, a total number of virtual spaces 13 set in the first virtual space 11, or a position, a shape, or a size of each of the virtual spaces 13 in the first virtual space 11.
It is possible to set, as appropriate, a total number of embodiment spaces 14 set in each of the virtual spaces 13, or a position, a shape, or a size of each of the embodiment spaces 14 in the first virtual space 11.
Now back to
A virtual object 15 is an object having a simple shape with a small number of component polygons. For example, it is possible to form a virtual object with a primitive surface, or a virtual object may be formed with one plate-shaped polygon.
Each virtual object 15 is associated with one embodiment target object (an object that is a target to be embodied) in the second virtual space 12 (a virtual space that is a target to be embodied) associated with its embodiment space 14.
An “embodiment target object” refers to, when a thing in the second virtual space 12 is to be embodied in the first virtual space 11, an object selected from among various types of objects disposed in the second virtual space 12. Since the fighting battle game undergoes progress control in the second virtual space 12, player characters 4 (4a and 4b) and an item 5 that the player character 4a holds in the second virtual space 12 serve as embodiment target objects. Virtual objects 15 (15a, 15b, and 15c) corresponding to the embodiment target objects are then disposed in the first virtual space 11.
Embodiment images 17 (17a, 17b, and 17c), which are images of the corresponding embodiment target objects, are texture-mapped onto the mapping surfaces of the virtual objects 15, and the virtual objects 15 undergo posture control that directs the normal direction of each mapping surface toward the corresponding user's point of view C1. Each virtual object 15 therefore undergoes posture control so as to take a predetermined relative orientation with respect to the line-of-sight direction of the user's point of view C1; so-called billboard processing is thus executed. Furthermore, the disposition configuration of the virtual objects 15 in the virtual space 13 is link-controlled with respect to the disposition configuration of the corresponding embodiment target objects in the second virtual space 12.
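Purely as an illustrative sketch (not part of the configuration described above), the billboard posture control can be understood as computing a rotation whose local normal axis points from the virtual object 15 toward the user's point of view C1. The function name billboard_rotation and the use of numpy are assumptions introduced here only for illustration.

```python
import numpy as np

def billboard_rotation(object_pos, viewpoint_pos, up=np.array([0.0, 1.0, 0.0])):
    """Return a 3x3 rotation matrix whose local +Z axis (the normal of the
    mapping surface) points from the virtual object 15 toward the user's
    point of view C1, i.e. the billboard processing described above."""
    normal = viewpoint_pos - object_pos
    normal = normal / np.linalg.norm(normal)
    right = np.cross(up, normal)
    right = right / np.linalg.norm(right)
    true_up = np.cross(normal, right)
    # Columns are the local right / up / normal axes expressed in world coordinates.
    return np.stack([right, true_up, normal], axis=1)

# Example: orient a virtual object at (0, 1, 0) toward an avatar's viewpoint at (3, 1.5, 4).
rotation = billboard_rotation(np.array([0.0, 1.0, 0.0]), np.array([3.0, 1.5, 4.0]))
```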
When a new embodiment space 14 enters the field of view of the user's point of view C1 of the avatar 8, the main server system 1100P requests the sub-server system 1100G managing the second virtual space 12 corresponding to the embodiment space 14 to set an embodiment point of view C3 corresponding to the user's point of view C1 of the avatar 8.
The embodiment point of view C3 is an imaging point of view used when capturing a captured image serving as the source of an embodiment image 17. The embodiment point of view C3 is a point of view that copies the position and the line-of-sight direction that the user's point of view C1 would have if the avatar 8 existed in the second virtual space 12. As the avatar 8 moves in the first virtual space 11 or changes its line-of-sight direction, the corresponding embodiment point of view C3 is controlled in a linked manner so that its position in the second virtual space 12 moves and its line-of-sight direction changes accordingly.
Specifically, the position and the line-of-sight direction of the embodiment point of view C3 are set by converting the position and the line-of-sight direction of the user's point of view C1 of the corresponding avatar 8 based on a coordinate transformation matrix between the coordinate system for the first virtual space 11 and the coordinate system for the second virtual space 12. Note that an imaging angle of view of the embodiment point of view C3 is set identically or substantially identically to that of the user's point of view C1 of the corresponding avatar 8.
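As a minimal sketch of this conversion, assuming the coordinate transformation between the first and second virtual space coordinate systems is held as a 4x4 homogeneous matrix (positions are affected by translation, line-of-sight directions are not), the embodiment point of view C3 could be derived as follows; the helper name to_second_space is illustrative.

```python
import numpy as np

def to_second_space(transform_4x4, c1_position, c1_direction):
    """Convert the user's point of view C1 (first virtual space coordinates) into
    the embodiment point of view C3 (second virtual space coordinates)."""
    pos = transform_4x4 @ np.append(c1_position, 1.0)          # positions use the translation part
    direction = transform_4x4 @ np.append(c1_direction, 0.0)   # directions do not
    c3_position = pos[:3] / pos[3]
    c3_direction = direction[:3] / np.linalg.norm(direction[:3])
    return c3_position, c3_direction
```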
The sub-server system 1100G generates captured images 18 (18a, 18b, and 18c) for each of the embodiment target objects in the second virtual space 12, which are associated with the virtual objects 15, from the embodiment point of view C3.
The captured images 18, which are each illustrated as a rectangle with a thick line in
For example, the captured image 18a of the player character 4a, which is an embodiment target object, is created by rendering only that character as imaged from the embodiment point of view C3. No background is rendered. Alternatively, the captured image 18a may be created by first rendering a whole field-of-view image from the embodiment point of view C3, and then cutting out only the part where the player character 4a is rendered. The captured image 18b of the player character 4b and the captured image 18c of the item 5 are created in a similar manner. Note that, when the item 5 is held by or is in contact with the player character 4a, the item 5 and the player character 4a may together be regarded as one virtual object 15 and treated as one captured image 18.
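A minimal sketch of the second approach (cutting out only the part where the embodiment target object is rendered from a whole field-of-view image), assuming the object's screen-space bounding rectangle is already known; Pillow and the file name used in the commented example are illustrative assumptions.

```python
from PIL import Image

def cut_captured_image(field_of_view_image: Image.Image, bbox):
    """Cut out the region of a whole field-of-view image where the embodiment
    target object was rendered. bbox is (left, upper, right, lower) in pixels."""
    return field_of_view_image.crop(bbox)

# whole = Image.open("field_of_view_from_C3.png")   # hypothetical rendered image
# captured_18a = cut_captured_image(whole, (120, 40, 380, 460))
```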
These captured images 18 (18a, 18b, and 18c) are not images used for the online game itself. Additional processing is therefore necessary in the sub-server system 1100G for embodying the second virtual space 12. However, the embodiment target objects serving as rendering targets are only some of all the objects in the second virtual space 12, so the process load associated with rendering the captured images 18 is far smaller than when all the objects are rendered.
As the captured images 18 based on the embodiment point of view C3 are generated, the sub-server system 1100G transmits the data of those images to the main server system 1100P. The main server system 1100P performs, as illustrated in
The “calculation amount reduction processing” is processing for reducing the calculation amount necessary when generating a first virtual space screen W1, and refers to, for example, image quality reduction processing for lowering image quality from high definition (HD) image quality to standard definition (SD) image quality (for example, reducing the number of colors or reducing the resolution). Texture mapping of the embodiment images 17 is then performed on the virtual objects 15.
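A minimal sketch of the image quality reduction mentioned above (lowering the resolution and reducing the number of colors of a captured image 18 to obtain an embodiment image 17), again using Pillow for illustration; the concrete scale factor and color count are assumptions.

```python
from PIL import Image

def reduce_calculation_amount(captured: Image.Image) -> Image.Image:
    """Produce an embodiment image 17 from a captured image 18 by lowering the
    resolution and reducing the number of colors (image quality reduction)."""
    half = (max(captured.width // 2, 1), max(captured.height // 2, 1))
    reduced = captured.convert("RGB").resize(half, Image.Resampling.BILINEAR)
    reduced = reduced.quantize(colors=64)   # reduce the number of colors
    return reduced.convert("RGB")
```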
Now back to
The size and the shape of each of the virtual objects 15 may be acquired, for example, as a rectangle whose upper-lower width and left-right width are identical to the upper-lower width and the left-right width of the corresponding embodiment image 17 (or a slightly larger rectangle). Alternatively, the size and the shape of each of the virtual objects 15 may be acquired by projecting a bounding box used for contact determination of the object of the player character 4 onto a normal surface of the embodiment point of view C3 (a surface whose normal line is the line-of-sight direction).
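A minimal sketch of the second option (projecting the bounding box of the player character 4 onto the normal surface of the embodiment point of view C3), assuming an axis-aligned bounding box and an orthographic projection onto that surface; the function name is illustrative.

```python
import numpy as np
from itertools import product

def projected_rectangle_size(bbox_min, bbox_max, line_of_sight):
    """Project the eight corners of a bounding box onto the surface whose normal
    line is the line-of-sight direction of C3, and return the (width, height) of
    the smallest enclosing rectangle, usable as the size of the virtual object 15."""
    normal = np.asarray(line_of_sight, dtype=float)
    normal = normal / np.linalg.norm(normal)
    helper = np.array([0.0, 1.0, 0.0]) if abs(normal[1]) < 0.99 else np.array([1.0, 0.0, 0.0])
    right = np.cross(helper, normal)
    right = right / np.linalg.norm(right)
    up = np.cross(normal, right)
    corners = np.array(list(product(*zip(bbox_min, bbox_max))), dtype=float)
    u = corners @ right
    v = corners @ up
    return float(u.max() - u.min()), float(v.max() - v.min())
```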
As a result, a situation of the corresponding second virtual space 12 is embodied in the embodiment space 14 in the virtual space 13, which has entered the field of view of the avatar 8 (8a) in the first virtual space screen W1 (W1a and W1b) (see
In particular, the virtual objects 15 (15a, 15b, and 15c) are formed by performing texture mapping of the embodiment images 17 (17a, 17b, and 17c) onto plate-shaped polygons, resulting in an extremely low calculation amount.
As the player character 4a, the player character 4b, and the item 5 move in the second virtual space 12, the virtual objects 15 (15a, 15b, and 15c) also change in position in a linked manner.
Control for expressing an embodiment space 14 embodying the second virtual space 12 in the first virtual space 11 is set for each avatar 8.
When the avatar 8b that differs from the avatar 8a (see
When a first virtual space screen W1 for the avatar 8b is to be generated, the virtual objects 15 (15a, 15b, and 15c) for the avatar 8a are temporarily excluded from rendering targets (rendering OFF), and an image of the first virtual space 11 imaged from the user's point of view C1b is then rendered. When a first virtual space screen W1 for the avatar 8a is to be generated, on the other hand, the virtual objects 15 (15d, 15e, and 15f) for the avatar 8b are temporarily excluded from rendering targets (rendering OFF), and an image of the first virtual space 11 imaged from the user's point of view C1a is then rendered.
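A minimal sketch of this per-avatar switching, assuming each virtual object 15 records which avatar it was prepared for and that the actual rendering processing is available as a callable; all names are illustrative.

```python
def render_first_virtual_space_screen(target_avatar_id, background_objects, virtual_objects, render):
    """Render a first virtual space screen W1 for one avatar: virtual objects 15
    prepared for other avatars are excluded from the rendering targets (rendering OFF)."""
    rendering_targets = list(background_objects) + [
        vo for vo in virtual_objects if vo["for_avatar"] == target_avatar_id
    ]
    return render(rendering_targets)   # stand-in for the actual rendering processing
```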
The main server system 1100P is thus able to embody a situation of the second virtual space 12 in the first virtual space 11 without it appearing unnatural when viewed from any of the avatars 8 in the first virtual space 11.
Next, a functional configuration will be described.
The main server system 1100P includes an operation input section 100p, a processing section 200p, a sound output section 390p, an image display section 392p, a communication section 394p, and a storage section 500p.
The operation input section 100p is a means for inputting various types of operations for managing the main server system 1100P. For example, the operation input section 100p corresponds to a keyboard, a touch panel, or a mouse.
The processing section 200p is implemented, for example, by a processor serving as a calculation circuit such as a CPU, a GPU, an ASIC, or an FPGA and an electronic component such as an IC memory, and performs input-and-output control for data among functional sections including the operation input section 100p and the storage section 500p. Various types of calculation processes are then executed based on a predetermined program and data, an operation input signal from the operation input section 100p, or data received from the user terminals 1500 and the sub-server systems 1100G (1100Ga, 1100Gb, . . . ), for example, to comprehensively control operation of the main server system 1100P.
The processing section 200p includes a user management section 202, a first virtual space control section 210, a timer section 280p, a sound generation section 290p, an image generation section 292p, and a communication control section 294p. Other functional sections may be included as appropriate, of course.
The user management section 202 performs processing related to a user registration procedure, storage management of various types of information associated with a user account, or processing for system login or system logout.
The first virtual space control section 210 performs various types of control related to the first virtual space 11.
The first virtual space control section 210 includes a virtual space setting section 212 and a space expression control section 214.
The virtual space setting section 212 sets a virtual space 13 for expressing an embodiment space 14 in the first virtual space 11. When a plurality of second virtual spaces 12 exist, the virtual space setting section 212 sets a virtual space 13 for each of the second virtual spaces 12 in the first virtual space 11.
The space expression control section 214 disposes an object in the virtual space 13 based on information of an object in the second virtual space 12, and performs control for expressing an embodiment space 14 embodying the second virtual space 12 based on a captured image 18 in which the second virtual space 12 is imaged from a given imaging point of view (the embodiment point of view C3 illustrated in
Specifically, the space expression control section 214 associates coordinates of the virtual space 13 in the first virtual space 11 and coordinates of the second virtual space 12 with each other to express the embodiment space 14 in which the second virtual space 12 is fixedly embodied in the virtual space 13. The space expression control section 214 then disposes, for each participating user participating in the first virtual space 11, the user's point of view C1 corresponding to the participating user. The space expression control section 214 then performs rendering processing on each of the virtual spaces 13 within the field of view of the user's point of view C1 corresponding to the participating user to perform control for expressing the embodiment space 14 viewed from each user's point of view.
Furthermore, the space expression control section 214 disposes virtual objects 15 corresponding to objects in the second virtual space 12 in a disposition configuration based on a disposition configuration in the second virtual space 12 to perform control for expressing an embodiment space. Specifically, the space expression control section 214 disposes the virtual objects 15 within the field of view of the user's point of view C1 in the first virtual space 11. The space expression control section 214 then performs rendering processing for rendering the virtual objects 15 onto which mapping of the images (the embodiment images 17 illustrated in
The space expression control section 214 includes a virtual object control section 216.
The virtual object control section 216 controls a position and/or an orientation of each of the virtual objects 15 in accordance with the position and/or the orientation of the user's point of view C1 in the first virtual space 11. Specifically, the virtual object control section 216 performs control for disposing each of the virtual objects 15 in a posture at a predetermined relative orientation with respect to the user's point of view C1 to follow a change in position and/or a change in orientation of the user's point of view C1. The billboard processing onto each of the virtual objects 15 corresponds to the processing described above (see
The timer section 280p utilizes a system clock to perform various types of time measurements such as a current date and time or a limited time period.
The sound generation section 290p is implemented by an IC or through execution of software that generates sound data or performs decoding. The sound generation section 290p outputs a generated sound signal to the sound output section 390p. The sound output section 390p is implemented by a speaker, for example, and emits sound based on the sound signal.
The image generation section 292p generates images of various types of management screens for system management of the main server system 1100P, and outputs image data to the image display section 392p. The image display section 392p is implemented by a flat panel display, a head-mounted display, or a projector, for example.
The communication control section 294p executes data processing related to data communication, and implements data exchange with an external device via the communication section 394p. The communication section 394p is coupled to the network 9 to implement communication. For example, the communication section 394p is implemented by a wireless communication device, a modem, a terminal adaptor (TA), a jack for wired communication cable, or a control circuit. In the example illustrated in
The storage section 500p stores programs and various types of data for implementing various types of functions for causing the processing section 200p to comprehensively control the main server system 1100P. The storage section 500p is used as a work area for the processing section 200p, and temporarily stores, for example, results of calculations executed by the processing section 200p in accordance with the various types of programs. This function is implemented, for example, by an IC memory such as a RAM or a ROM, a magnetic disc such as a hard disk, an optical disc such as a compact disc read-only memory (CD-ROM) or a digital versatile disc (DVD), or an online storage. In the example illustrated in
The storage section 500p stores a main server program 501, a distribution purpose first client program 502, sub-server registration data 510, user management data 520, first virtual space control data 522, and a current date and time 900. Note that the storage section 500p stores other programs and data (for example, a timer, a counter, or various types of flags) as appropriate.
The main server program 501 is a program read and executed by the processing section 200p to cause the main server system 1100P to function as the user management section 202 or the first virtual space control section 210, for example.
The distribution purpose first client program 502 is an application program provided to and executed by the user terminals 1500, and is an original of a client program for utilizing the first virtual space 11.
The sub-server registration data 510 is prepared for each of the sub-server systems 1100G. The sub-server registration data 510 includes a unique server ID, a virtual space ID uniquely set to the second virtual space 12 managed by the sub-server system 1100G, and server access information that is necessary for coupling to the sub-server system 1100G to make data communication possible. Other types of data may be included as appropriate, of course.
The user management data 520 is prepared for each user having undergone a registration procedure, stores various types of data related to the user, and is managed by the user management section 202. One piece of the user management data 520 includes, for example, a user account unique to the user, game saving data, and participation history data (for example, dates and times of login and logout). Other types of data may be included as appropriate, of course.
The first virtual space control data 522 stores various types of data related to control of the first virtual space 11. For example, the first virtual space control data 522 stores avatar management data 524 for each avatar 8, virtual space management data 530, and first virtual space screen display control data 550.
The avatar management data 524 includes, for example, a user account indicating a first participating user using the avatar 8, a user's point-of-view position and a user's line-of-sight direction of the user's point of view C1 of the avatar 8, and avatar object control data for controlling an object of the avatar 8.
The virtual space management data 530 is created for each virtual space 13, and stores various types of data related to the virtual space 13. One piece of the virtual space management data 530 stores, for example, as illustrated in
The virtual space definition data 533 indicates a location and a shape of the virtual space 13 set in the first virtual space 11. For example, a position coordinate of a representative point in the virtual space 13 in a first virtual space coordinate system, and boundary setting data for setting a boundary of the virtual space 13 (for example, a coordinate of each vertex on a contour of the boundary) are included.
One piece of the embodiment space management data 540 includes a management server ID 541, embodiment space definition data 542, and coordinate transformation definition data 544.
The management server ID 541 indicates the sub-server system 1100G controlling and managing the second virtual space 12 embodied in the embodiment space 14.
The embodiment space definition data 542 indicates a location and a shape of the embodiment space 14 set in the virtual space 13. For example, data defining a position and a size of the embodiment space 14 in the first virtual space 11 (for example, a position coordinate of a representative point or boundary setting data) is included.
The coordinate transformation definition data 544 indicates a transformation matrix from a coordinate system for the first virtual space 11 where the embodiment space 14 exists (the first virtual space coordinate system) to a coordinate system for the second virtual space 12 embodied in the embodiment space 14 (an original virtual space coordinate system).
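Purely for illustration, one piece of the embodiment space management data 540 could be held in memory as sketched below, assuming Python dataclasses; the field names follow the reference numerals above, while the concrete types are assumptions.

```python
from dataclasses import dataclass, field
from typing import List
import numpy as np

@dataclass
class EmbodimentSpaceManagementData:               # one piece of data 540
    management_server_id: str                      # 541: sub-server system 1100G in charge
    representative_point: np.ndarray               # 542: position of the embodiment space 14
    boundary: List[np.ndarray] = field(default_factory=list)   # 542: boundary setting data
    coordinate_transform: np.ndarray = field(      # 544: first-space to second-space matrix
        default_factory=lambda: np.eye(4))
```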
Now back to
The first virtual space screen display control data 550 stores, for example, as illustrated in
The registration embodiment space management data 560 is created each time a new embodiment space 14 entering the field of view of the avatar 8 is detected, and stores various types of data related to the embodiment space 14. One piece of the registration embodiment space management data 560 includes a management server ID 561, an applied embodiment point-of-view ID 562, an embodiment target object list 564, and space expression control data 566.
The management server ID 561 indicates the sub-server system 1100G controlling and managing the second virtual space 12 serving as the source of the embodiment space 14.
The applied embodiment point-of-view ID 562 indicates the embodiment point of view C3 set in the second virtual space 12 for the sub-server system 1100G that the management server ID 561 indicates, that is, the embodiment point of view C3 of the avatar 8 that the avatar ID 551 indicates.
The embodiment target object list 564 is a list of object IDs of embodiment target objects in the second virtual space 12 embodied in the embodiment space 14. As a predetermined request is transmitted from the main server system 1100P to the sub-server system 1100G that the management server ID 561 indicates, provision of the embodiment target object list 564 is received from the sub-server system 1100G. The list serves as sources of virtual objects 15 (see
The space expression control data 566 represents a group of pieces of data for controlling expression of the embodiment space 14.
For example, the space expression control data 566 includes virtual object control data 570 for each object in the embodiment target object list 564.
One piece of the virtual object control data 570 includes a unique virtual object ID 572, an embodiment target object ID 574 indicating a target embodied by each of the virtual objects 15, object shape data 576, a disposed position 578 in the first virtual space 11, a posture of disposition 580, captured image data 584 for an embodiment target object that the embodiment target object ID 574 indicates, and embodiment image data 586 with which texture mapping is performed onto each of the virtual objects 15.
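Under the same illustrative assumption, one piece of the virtual object control data 570 might be sketched as follows; holding the image data as raw bytes is an assumption made only for brevity.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class VirtualObjectControlData:            # one piece of data 570
    virtual_object_id: str                 # 572
    embodiment_target_object_id: str       # 574
    object_shape: np.ndarray               # 576: e.g. vertices of a plate-shaped polygon
    disposed_position: np.ndarray          # 578: position in the first virtual space 11
    posture_of_disposition: np.ndarray     # 580: normal direction of the mapping surface
    captured_image: bytes                  # 584: captured image 18
    embodiment_image: bytes                # 586: embodiment image 17 used for texture mapping
```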
The posture of disposition 580 indicates an orientation of each of the virtual objects 15. When a virtual object 15 is a plate-shaped polygon, the normal direction of the mapping surface that undergoes texture mapping is indicated.
The operation input section 100g is a means for inputting various types of operations for managing the sub-server system 1100G. For example, the operation input section 100g corresponds to a keyboard, a touch panel, a mouse, or a VR controller.
The processing section 200g is implemented, for example, by a processor serving as a calculation circuit such as a CPU, a GPU, an ASIC, or an FPGA and an electronic component such as an IC memory, and performs input-and-output control for data among functional sections including the operation input section 100g and the storage section 500g. Various types of calculation processes are then executed based on a predetermined program and data, an operation input signal from the operation input section 100g, or data received from the user terminals 1500 and the main server system 1100P, for example, to comprehensively control operation of the sub-server system 1100G.
The processing section 200g includes a second virtual space control section 230, a timer section 280g, a sound generation section 290g, an image generation section 292g, and a communication control section 294g. Other functional sections may be included as appropriate, of course.
The second virtual space control section 230 performs various types of control related to the second virtual space 12. Since the second virtual space 12 is a game space, the second virtual space control section 230 implements a function as a game server. For example, participation registration control (play login) for a user (a second participating user) who is also a player, control for the player character 4, game progress control, control for an NPC, and management of background objects are executed. The second virtual space control section 230 includes an imaging point-of-view control section 232 and a captured image generation control section 234.
The imaging point-of-view control section 232 sets an imaging point of view in the second virtual space 12, and controls its position or its line-of-sight direction (an orientation). Specifically, the imaging point-of-view control section 232 sets and controls the participating user's point of view C2 or the embodiment point of view C3 (see
For the embodiment point of view C3, the position and/or the orientation of the embodiment point of view C3 for the avatar 8 is further controlled in accordance with the position and/or the orientation of the user's point of view C1 corresponding to that avatar 8 in the first virtual space 11. That is, as the avatar 8 moves in the first virtual space 11 and the position and the line-of-sight direction of the user's point of view C1 change, the corresponding embodiment point of view C3 undergoes trace control so that it similarly moves in the second virtual space 12 and its position and line-of-sight direction change.
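A minimal sketch of this trace control, reusing the hypothetical to_second_space helper sketched earlier and assuming the update runs each time the user's point of view C1 changes; the dictionary keys mirror the reference numerals 645 and 647.

```python
def trace_embodiment_point_of_view(embodiment_pov, c1_position, c1_direction, transform_4x4):
    """Make the embodiment point of view C3 follow the latest position and
    line-of-sight direction of the user's point of view C1 (trace control)."""
    c3_position, c3_direction = to_second_space(transform_4x4, c1_position, c1_direction)
    embodiment_pov["disposed_position"] = c3_position          # 645
    embodiment_pov["line_of_sight_direction"] = c3_direction   # 647
    return embodiment_pov
```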
The captured image generation control section 234 performs, for each embodiment point of view C3, control related to generation of a captured image 18 in which an embodiment target object is imaged (see
The timer section 280g utilizes a system clock to perform various types of time measurements such as a current date and time or a limited time period.
The sound generation section 290g is implemented by an IC or through execution of software that generates sound data or performs decoding, and generates or decodes sound data of operational sounds, sound effects, background music (BGM), or voice speech, for example, related to system management of the sub-server system 1100G or provision of the online game. Then, a sound signal related to system management is outputted to the sound output section 390g. The sound output section 390g is implemented by a speaker, for example, and emits sound based on the sound signal.
The image generation section 292g generates images of various types of management screens for system management of the sub-server system 1100G, and outputs display control signals for displaying the generated images to the image display section 392g. The image display section 392g is implemented by a device for displaying an image, such as a flat panel display, a head-mounted display, or a projector.
Furthermore, the image generation section 292g performs generation of an image related to gameplay. For example, rendering of an image in which the second virtual space 12 is imaged from the participating user's point of view C2 (rendering processing) and generation of a second virtual space screen W2 to be displayed on each of the user terminals 1500 are performed (see
Furthermore, the image generation section 292g performs rendering of an image (a captured image 18; see
The communication control section 294g implements data exchange with an external device via the communication section 394g. The communication section 394g is coupled to the network 9 to implement communication. For example, the communication section 394g is implemented by a wireless communication device, a modem, a terminal adaptor (TA), a jack for wired communication cable, or a control circuit. In the example illustrated in
The storage section 500g stores, for example, programs and various types of data for implementing various types of functions for causing the processing section 200g to comprehensively control the sub-server system 1100G. Furthermore, the storage section 500g is used as a work area for the processing section 200g, and temporarily stores, for example, results of calculations executed by the processing section 200g in accordance with various types of programs. This function is implemented, for example, by an IC memory such as a RAM or a ROM, a magnetic disc such as a hard disk, an optical disc such as a CD-ROM or a DVD, or an online storage. In the example illustrated in
The storage section 500g stores, for example, a sub-server program 503, a distribution purpose second client program 504, game initial setting data 590, second virtual space control data 600, and a current date and time 900. Other types of data may be included as appropriate, of course.
The sub-server program 503 is a program read and executed by the processing section 200g to cause the processing section 200g to function as the second virtual space control section 230. In the present embodiment, since the online game is implemented by using the second virtual space 12, the sub-server program 503 also serves as a game server program for implementing a function as a game server.
The distribution purpose second client program 504 is an original of an application program provided to and executed by the user terminals 1500 accessing the sub-server systems 1100G. Note that the distribution purpose second client program 504 may be included in the distribution purpose first client program 502 (see
The game initial setting data 590 stores various types of initial setting data for the online game. For example, object definition data 592 is stored. The object definition data 592 is prepared for each object to be disposed in the second virtual space 12, and stores various types of initial setting data related to the object. For example, the object definition data 592 includes an object ID, an embodiment target flag that is set to “1” when the object is an embodiment target object, and an object model. Other types of data may be included as appropriate, of course.
The second virtual space control data 600 corresponds to control data for the game space, and includes, for example, a unique virtual space ID 601, second participating user management data 602, game progress control data 604, object control data 610, participating user's point-of-view control data 630, and embodiment point-of-view control data 640. Other types of data may be included as appropriate, of course.
The second participating user management data 602 is created each time a second participating user logs in for gameplay, and stores a user account and an object ID of a player character 4, for example.
The object control data 610 is created for each object disposed in the second virtual space 12, and stores various types of data related to the object. One piece of the object control data 610 includes, for example, an object ID 611, an object category 613 indicating whether the object is a background object or a player character 4, an embodiment target flag 615, a disposed position 617, and a posture of disposition 619. Other types of data such as motion control data may be included as appropriate, of course.
The participating user's point-of-view control data 630 is created for each participating user's point of view C2, and stores various types of data describing a latest state. The participating user's point-of-view control data 630 includes an imaging point-of-view ID 631, an applied user account 633 indicating the second participating user using the point of view, a disposed position 635 in the second virtual space 12, and a line-of-sight direction 637 (an orientation of disposition) in the second virtual space 12. Other types of data may be included as appropriate, of course.
The embodiment point-of-view control data 640 is created for each embodiment point of view C3, and stores various types of data related to the embodiment point of view C3. One piece of the embodiment point-of-view control data 640 includes an imaging point-of-view ID 641, an applied avatar ID 643 indicating the avatar 8 to which the point of view corresponds, a disposed position 645 in the second virtual space 12, and a line-of-sight direction 647 in the second virtual space 12. Other types of data may be included as appropriate, of course.
The operation input section 100 outputs operation input signals in accordance with various types of operation inputs performed by the user to the terminal processing section 200. For example, the operation input section 100 is implemented by a push switch, a joystick, a touch pad, a track ball, an acceleration sensor, a gyro, or a VR controller.
The terminal processing section 200 is implemented, for example, by a microprocessor such as a CPU or a GPU and an electronic component such as an IC memory, and performs input-and-output control for data among functional sections including the operation input section 100 and the terminal storage section 500. Various types of calculation processes are then executed based on a predetermined program and data, an operation input signal from the operation input section 100, or various types of data received from the main server system 1100P and the sub-server systems 1100G to control operation of the user terminal 1500.
The terminal processing section 200 includes a client control section 260, a timer section 280, a sound generation section 290, an image generation section 292, and a communication control section 294.
The client control section 260 performs various types of control as a client or a game client in the virtual space control system 1000 to cause the user terminal 1500 to function as a man-machine interface (MMIF). Specifically, the client control section 260 includes an operation input information provision section 261 and a display control section 262.
The operation input information provision section 261 performs control for transmitting operation input information to the main server system 1100P and the sub-server systems 1100G in accordance with an input from the operation input section 100.
The display control section 262 performs control for displaying various types of images based on data received from the main server system 1100P and the sub-server systems 1100G.
The timer section 280 utilizes a system clock to perform time measurements such as a current date and time or a limited time period.
The sound generation section 290 is implemented, for example, by a processor such as a digital signal processor (DSP) or a sound synthesizing IC and an audio codec that makes it possible to play a sound file; it generates sound signals of music, sound effects, or various types of operational sounds, and outputs the generated sound signals to the sound output section 390. The sound output section 390 is implemented by a device, such as a speaker, that outputs sound (emits sound) based on the sound signals inputted from the sound generation section 290.
The image generation section 292 outputs a display control signal causing the image display section 392 to display an image based on control of the client control section 260. In the example illustrated in
The communication control section 294 executes data processing related to data communication, and implements data exchange with an external device via the communication section 394.
The communication section 394 is coupled to the network 9 to implement communication. For example, the communication section 394 is achieved by a wireless communication device, a modem, a terminal adaptor (TA), a jack for wired communication cable, or a control circuit. In the example illustrated in
The terminal storage section 500 stores programs and various types of data for causing the terminal processing section 200 to implement given functions. Furthermore, the terminal storage section 500 is used as a work area for the terminal processing section 200, and temporarily stores results of calculations executed by the terminal processing section 200 in accordance with various types of programs or input data inputted from the operation input section 100. These functions are implemented, for example, by an IC memory such as a RAM or a ROM, a magnetic disc such as a hard disk, or an optical disc such as a CD-ROM or a DVD. In the example illustrated in
Specifically, the terminal storage section 500 stores a first client program 505 (an application program), a second client program 506 (an application program), and a current date and time 900. Other types of data may be stored as appropriate, of course. For example, a token, a flag, a timer, or a counter is also stored.
The first client program 505 is a program for implementing a function as the client control section 260 for utilizing the first virtual space 11, and is acquired from the main server system 1100P.
The second client program 506 is a program for implementing a function as the client control section 260 for utilizing the second virtual space 12, and is acquired from the sub-server system 1100G.
In the processing, the main server system 1100P disposes a background object in the first virtual space 11, and starts automatic control for the first virtual space 11 (step S10). For example, automatic control for motion of a non-player character (NPC) or occurrence of an event is started.
The main server system 1100P communicates with the user terminal 1500 executing the first client program 505, and, as a predetermined system login is received, determines that there is a participation requesting user (YES in step S12) and regards the user who has system-logged in as a new first participating user. Then, avatar management data 524 is created for the new first participating user, an avatar 8 of the user is disposed in the first virtual space 11, and motion control for the avatar 8 in accordance with an operation input performed by the participating user is started (step S14). Then, the user's point of view C1 of the newly disposed avatar 8 is set at a predetermined position on the avatar 8 (step S16). After that, the user's point of view C1 is automatically controlled so as to be linked to the position and the orientation of the head of the avatar 8 in accordance with motion control for the avatar 8.
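The following Python sketch merely illustrates one possible form of such linked control; the structures, field names, and the head offset used here are assumptions for illustration and are not taken from the present disclosure.

```python
from dataclasses import dataclass, field

# Hypothetical, simplified structures; names and fields are illustrative only.
@dataclass
class PointOfView:
    position: tuple = (0.0, 0.0, 0.0)
    direction: tuple = (0.0, 0.0, 1.0)

@dataclass
class Avatar:
    avatar_id: int
    head_position: tuple = (0.0, 1.7, 0.0)   # assumed head position of the avatar 8
    head_direction: tuple = (0.0, 0.0, 1.0)  # assumed head orientation of the avatar 8
    point_of_view: PointOfView = field(default_factory=PointOfView)

def update_user_point_of_view(avatar: Avatar) -> None:
    # The user's point of view C1 is kept linked to the position and the
    # orientation of the head of the avatar 8 whenever the avatar is moved.
    avatar.point_of_view.position = avatar.head_position
    avatar.point_of_view.direction = avatar.head_direction
```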
Next, the main server system 1100P executes a loop A for each avatar 8, and executes first virtual space screen display processing for each avatar 8 (from step S20 to step S22).
Next, the main server system 1100P refers to the embodiment space management data 540 (see
Specifically, information on a disposed position and a line-of-sight direction of the embodiment point of view C3 to be set in the second virtual space 12 is transmitted to the transmission destination, that is, the sub-server system 1100G. The disposed position and the line-of-sight direction of the embodiment point of view C3 in the second virtual space 12 are acquired by converting the user's point-of-view position 553 and the user's line-of-sight direction 555 (see
When, on the other hand, an embodiment space 14 that has exited the field of view has been newly detected among the embodiment spaces 14 registered in relation to the target avatar (YES in step S36), the main server system 1100P discards the registration of the detected embodiment space 14, and requests discarding of the embodiment point of view C3 (step S38). Specifically, the registration embodiment space management data 560 (see
Next, the main server system 1100P executes a loop B for each embodiment space registered in relation to the target avatar (from step S50 to step S76).
During the loop B, the main server system 1100P transmits a predetermined control request to the sub-server system 1100G managing the second virtual space 12 serving as the source of the registered embodiment space (the target embodiment space) regarded as the processing target (step S52).
A “control request” is a request asking the sub-server system 1100G to perform control for causing the position and the line-of-sight direction of the embodiment point of view C3 of the target avatar in the second virtual space 12 to follow, in a linked manner, the latest state of the position and the line-of-sight direction of the user's point of view C1 of the target avatar in the first virtual space 11. Together with the request, the avatar ID 551 of the target avatar, a post-change position, and a post-change line-of-sight direction are transmitted. The post-change position and line-of-sight direction are acquired by converting the disposed position and the line-of-sight direction of the user's point of view C1 of the target avatar in the first virtual space coordinate system based on the coordinate transformation definition data 544 of the target embodiment space.
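The conversion behind the control request may be sketched as follows. The coordinate transformation definition data 544 is assumed here, purely for illustration, to consist of a uniform scale, a yaw rotation, and a translation between the two coordinate systems; this is only one possible form and is not prescribed by the present description.

```python
import math

def to_second_space(position, direction, scale, yaw_rad, offset):
    # Rotate about the vertical (y) axis, then scale and translate the position;
    # the line-of-sight direction is only rotated, not scaled or translated.
    cos_y, sin_y = math.cos(yaw_rad), math.sin(yaw_rad)

    def rotate(v):
        x, y, z = v
        return (cos_y * x + sin_y * z, y, -sin_y * x + cos_y * z)

    px, py, pz = rotate(position)
    converted_position = (px * scale + offset[0],
                          py * scale + offset[1],
                          pz * scale + offset[2])
    converted_direction = rotate(direction)
    return converted_position, converted_direction

# The control request of step S52 would then carry, for example:
#   {"avatar_id": avatar_id,
#    "position": converted_position,
#    "direction": converted_direction}
```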
Next, the main server system 1100P transmits a provision request for requesting provision of a list of the embodiment target objects in the target embodiment space, together with the avatar ID 551 of the target avatar, to the sub-server system 1100G managing the second virtual space 12 serving as the source of the target embodiment space, and acquires a list (step S54). The embodiment target object list 564 is updated with the received list (see
Next, the main server system 1100P deletes the virtual object control data 570 (see
During the loop C, the main server system 1100P transmits a predetermined image request for requesting provision of a captured image 18 (see
Next, the main server system 1100P performs calculation amount reduction processing on the received captured image 18 to generate an embodiment image 17 (step S64).
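The concrete content of the calculation amount reduction processing is not fixed by the present description. The following Python sketch assumes, purely for illustration, that the processing lowers the resolution and the color depth of the captured image 18; the image representation used here is a hypothetical one.

```python
def reduce_calculation_amount(pixels, stride: int = 2):
    # `pixels` is assumed to be a list of rows, each row a list of (r, g, b)
    # tuples. Every `stride`-th pixel is kept in both directions (resolution
    # reduction) and each channel is quantized to 32 levels (color reduction),
    # yielding a lighter embodiment image 17.
    reduced = []
    for row in pixels[::stride]:
        reduced.append([(r // 8 * 8, g // 8 * 8, b // 8 * 8) for r, g, b in row[::stride]])
    return reduced
```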
Now move to
Then, the main server system 1100P determines a size and a shape of each virtual object 15 of the target object, and disposes the object in the first virtual space 11 (step S68).
Specifically, the size and the shape of each virtual object 15 are set, as the object shape data 576, to those of a rectangle whose upper-lower width and left-right width are identical to those of the received captured image 18 or the generated embodiment image 17, and the virtual object 15 in accordance with the set data is disposed at the position of disposition 578.
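A minimal sketch of this shape setting is given below; the conversion factor between image pixels and virtual-space units is an assumed value introduced only for illustration.

```python
PIXELS_PER_UNIT = 100.0  # assumed conversion between image pixels and virtual-space units

def plate_shape_for_image(image_width_px: int, image_height_px: int) -> dict:
    # Object shape data 576: a plate (rectangle) whose left-right and
    # upper-lower widths match those of the embodiment image 17.
    return {"left_right_width": image_width_px / PIXELS_PER_UNIT,
            "upper_lower_width": image_height_px / PIXELS_PER_UNIT}
```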
Then, the main server system 1100P performs texture mapping of the embodiment image 17 onto the disposed virtual object 15 (step S70), and performs the billboard processing for causing the mapping surface of the virtual object 15 to face the user's point of view C1 of the target avatar (step S72).
Note that the billboard processing may be omitted as appropriate in accordance with a relative positional relationship between, and relative orientations of, a virtual object 15 of the embodiment target object and the user's point of view C1 of a target avatar. For example, the billboard processing may be omitted for a virtual object 15 of an embodiment target object that is away from the user's point of view C1 by a predetermined distance or more. Furthermore, the billboard processing may be omitted when a difference between the orientation of the line-of-sight direction of the user's point of view C1 and the normal direction with respect to the mapping surface of a virtual object 15 falls within an allowable angle range. This makes it possible to reduce the process load in the main server system 1100P.
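The omission decision described above may be sketched as follows. The distance and angle thresholds are illustrative values only; the present description merely requires "a predetermined distance" and "an allowable angle range".

```python
import math

SKIP_DISTANCE = 50.0        # assumed distance beyond which billboarding is omitted
ALLOWABLE_ANGLE_DEG = 5.0   # assumed tolerance between the sight line and the surface normal

def _norm(v):
    length = math.sqrt(sum(c * c for c in v)) or 1.0
    return tuple(c / length for c in v)

def should_billboard(object_pos, object_normal, view_pos, view_dir) -> bool:
    dx, dy, dz = (object_pos[i] - view_pos[i] for i in range(3))
    distance = math.sqrt(dx * dx + dy * dy + dz * dz)
    if distance >= SKIP_DISTANCE:
        return False  # far from the user's point of view C1: omit billboard processing
    # Angle between the line of sight and the (reversed) mapping-surface normal.
    nx, ny, nz = _norm(object_normal)
    vx, vy, vz = _norm(view_dir)
    cos_angle = -(nx * vx + ny * vy + nz * vz)
    angle = math.degrees(math.acos(max(-1.0, min(1.0, cos_angle))))
    if angle <= ALLOWABLE_ANGLE_DEG:
        return False  # the mapping surface already faces the point of view
    return True
```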
After the loop C has been executed for all the embodiment target objects, and the virtual objects 15 corresponding to the embodiment target objects have been disposed in the virtual space 13 (step S74), a next registered embodiment space is regarded as a target embodiment space, and the loop B is repeated.
After the loop B has been executed for all the registered embodiment spaces (step S76), the main server system 1100P sets rendering ON for the virtual objects 15 for the target avatar, and rendering OFF for the virtual objects 15 of other avatars (step S100). Next, the main server system 1100P renders an image of the first virtual space 11, which is captured from the user's point of view C1 of the target avatar, adds information display, for example, as appropriate, to the image, and generates a first virtual space screen W1 (step S102).
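A minimal sketch of this per-avatar rendering switch is given below; the record fields used for the virtual objects 15 are hypothetical and introduced only for illustration.

```python
def set_render_flags(virtual_objects, target_avatar_id) -> None:
    # Before rendering the first virtual space screen W1 of the target avatar,
    # only the virtual objects 15 created for that avatar are made renderable;
    # the virtual objects of the other avatars are excluded from rendering.
    for obj in virtual_objects:
        obj["render"] = (obj["owner_avatar_id"] == target_avatar_id)
```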
Then, the user terminal 1500 of the target avatar is caused to display the first virtual space screen W1 (step S104), and the first virtual space screen display processing for the target avatar ends.
Now back to
The main server system 1100P repeats and executes step S12 to step S110.
The sub-server system 1100G refers to the game initial setting data 590, disposes background objects forming a game space in the second virtual space 12, and starts automatic control for the second virtual space 12 (step S130). Then, the sub-server system 1100G raises an embodiment target flag 615 (see
The sub-server system 1100G communicates with the user terminals 1500 executing the second client program 506, and, as a login for gameplay is received, determines that there is a participation requesting user (YES in step S140). The sub-server system 1100G regards the user who has play-logged in as a new second participating user, disposes its player character 4 in the second virtual space 12, and starts motion control in accordance with an operation input performed by the new second participating user (step S142).
Next, the sub-server system 1100G sets the participating user's point of view C2 (the imaging point of view) at a predetermined position on the newly disposed player character 4, and starts automatic control for causing the position and the line-of-sight direction of the point of view to follow a change in position or orientation of the player character 4 (step S144). The sub-server system 1100G also raises the embodiment target flag 615 of the player character 4 (step S146).
In the present embodiment, since the one-on-one battle fighting game where two players fight in the second virtual space 12 is executed, the participating user's point of view C2 is shared. Depending on the game genre, however, the participating user's point of view C2 may be set for each player character 4. Furthermore, although, in the present embodiment, a player character 4 is unconditionally set as an embodiment target object, not all player characters 4 necessarily need to be set as embodiment target objects in another game such as a massively multiplayer online role playing game (MMORPG). For example, in this step, some embodiment target objects may be extracted from among the player characters 4 already disposed in the second virtual space 12.
Next, the sub-server system 1100G determines whether game progress control can be started (step S150). Since, in the present embodiment, the one-on-one battle fighting game where two players fight in the second virtual space 12 is executed, the determination is affirmative when two second participating users have joined (YES in step S150), and game progress control is started (step S152). The sub-server system 1100G starts, as the game progress control, control for generating an image of the second virtual space 12, which is captured from the participating user's point of view C2, and for allowing the user terminals 1500 of the second participating users to display a second virtual space screen W2 (see
Note that, when a genre of a game executed in the second virtual space 12 is a multiplayer online role playing game (MORPG), for example, step S150 to step S154 may be executed, together with step S142 and step S144.
The sub-server system 1100G sets, as a setting request is received from the main server system 1100P (YES in step S160), the embodiment point of view C3 in the second virtual space 12 (step S162).
Furthermore, the sub-server system 1100G deletes and discards, as a discard request is received from the main server system 1100P (YES in step S164), the requested embodiment point of view C3 from the second virtual space 12 (step S166).
Now move to
Furthermore, the sub-server system 1100G searches for and retrieves, as a provision request is received from the main server system 1100P (YES in step S184), the objects whose embodiment target flag 615 is “1 (flag is raised)” from among the objects disposed in the second virtual space 12. Then, the sub-server system 1100G transmits a list of the object IDs 611 of the retrieved objects to the main server system 1100P (step S186).
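A minimal sketch of this list provision is given below. The record fields correspond to the object ID 611 and the embodiment target flag 615, but their concrete representation is an assumption made only for illustration.

```python
def collect_embodiment_target_ids(objects_in_second_space) -> list:
    # Gather the object IDs 611 of all objects whose embodiment target flag 615
    # is raised ("1"); the resulting list is returned to the main server system.
    return [obj["object_id"]
            for obj in objects_in_second_space
            if obj.get("embodiment_target_flag") == 1]
```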
Furthermore, the sub-server system 1100G generates, as an image request is received from the main server system 1100P (YES in step S190), a captured image 18 in which the requested embodiment target object is captured from the requested embodiment point of view C3, and transmits the generated image to the main server system 1100P (step S192).
Furthermore, the sub-server system 1100G transmits, as a position request is received from the main server system 1100P (YES in step S194), the position coordinate of the requested embodiment target object to the main server system 1100P (step S196).
As a game end condition is satisfied (YES in step S200), the sub-server system 1100G executes game end processing (step S202). For example, the player character 4 of the second participating user is deleted from the second virtual space 12. In a case of a tournament-type battle game, the player character 4 of a winner does not necessarily have to be deleted.
As described above, according to the present embodiment, it is possible to provide a new embodiment technique that creates a situation in which a second virtual space appears as if it exists in a first virtual space.
The virtual space control system 1000 sets a virtual space 13 in the first virtual space 11, and expresses an embodiment space 14 embodying the second virtual space 12 in the virtual space 13. At that time, only a virtual object 15 embodying an embodiment target object in the second virtual space 12 is disposed in the virtual space 13. That is, it is possible to embody a situation of the second virtual space 12 in the first virtual space 11 with a far smaller calculation amount, far fewer processing steps, and a far smaller data amount, compared with a case where all objects disposed in the second virtual space 12 are replicated in the first virtual space 11 and motion control similar or identical to that for the sources of replication is performed.
In addition, aligning the disposition configuration of the virtual objects 15 with the disposition configuration of the embodiment target objects in the second virtual space 12 associated with those virtual objects makes it possible to express, in the first virtual space 11, the situation of the second virtual space 12 in a manner similar to a live play.
Then, since the captured images 18 serving as sources of the embodiment images 17 to be texture-mapped onto the virtual objects 15 are generated in the sub-server systems 1100G, it is possible to suppress the process load related to expression of the embodiment spaces 14 in the main server system 1100P. In addition, since the embodiment images 17 are created by performing the calculation amount reduction processing on the captured images 18, it is possible to further suppress the process load in the main server system 1100P.
An example of the embodiment to which the present disclosure is applied has been described so far. Note that the present disclosure is not limited to the foregoing embodiment.
Various modifications may be made as appropriate, such as adding other elements, omitting some of the elements, or changing some of the elements.
For example, although, in the embodiment described above, the embodiment point of view C3 has been set in the second virtual space 12 for each avatar 8, the present disclosure is not limited to the embodiment. For example, as illustrated in
Although the number of the candidate imaging points of view C4 may be set as appropriate, for example, approximately 100 imaging points of view may be set for imaging embodiment target objects (in the example illustrated in
An avatar's point of view C3′ in
In the modification example, the sub-server system 1100G executes a flow of embodiment point-of-view selection processing illustrated in
Furthermore, in the modification example, the sub-server system 1100G executes a flow of image provision processing illustrated in
That is, the sub-server system 1100G generates a captured image 18 of an embodiment target object requested to be imaged from the embodiment point of view C3 of the avatar 8 specified by an image request (step S220). Next, the sub-server system 1100G regards a representative point of the requested embodiment target object as a start point, calculates a distance L1 from the start point to the avatar's point of view C3′ and a distance L2 from the start point to the embodiment point of view C3 (step S224), and performs enlargement-or-reduction processing on the previously generated captured image 18 based on the ratio between the distance L1 and the distance L2. When the avatar's point of view C3′ is farther than the embodiment point of view C3 from the requested embodiment target object, the captured image 18 is reduced in accordance with the ratio; in the opposite case, it is enlarged in accordance with the ratio.
Next, the sub-server system 1100G performs projection conversion processing on the enlarged or reduced captured image 18 onto a normal surface (a surface whose normal direction is the line-of-sight direction) of the avatar's point of view C3′ (step S226), and transmits the resulting image to the main server system 1100P as the captured image 18 (step S228).
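A minimal sketch of the enlargement-or-reduction based on the distance ratio is given below. The captured image 18 is assumed, purely for illustration, to be represented as a list of pixel rows, and nearest-neighbour sampling is used; the present description does not prescribe a particular resampling method.

```python
def rescale_by_distance_ratio(pixels, distance_l1: float, distance_l2: float):
    # `pixels` is a hypothetical list-of-rows image representation.
    # A farther avatar's point of view C3' (L1 > L2) gives a ratio below 1 and
    # thus a reduction; a nearer C3' gives a ratio above 1 and thus an enlargement.
    ratio = max(distance_l2, 1e-6) / max(distance_l1, 1e-6)
    src_h, src_w = len(pixels), len(pixels[0])
    dst_h = max(1, round(src_h * ratio))
    dst_w = max(1, round(src_w * ratio))
    return [[pixels[min(src_h - 1, int(y / ratio))][min(src_w - 1, int(x / ratio))]
             for x in range(dst_w)]
            for y in range(dst_h)]
```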
Furthermore, in the embodiment described above, although a virtual object 15 is created for each embodiment target object, such a configuration may be applied that all embodiment target objects are expressed as one virtual object 15 in a case where the participating user's point of view C2 accommodates all the embodiment target objects within its imaging range.
Furthermore, in the embodiment described above, although an example where a player character 4 serves as an embodiment target object has been described, objects for various types of effect display such as luster, explosive smoke, fog, spark, and concentrated line work may serve as embodiment target objects as appropriate.
Furthermore, although, in the embodiment described above, a plate-shaped polygon has been exemplified as a virtual object 15, the present disclosure is not limited to the embodiment. For example, a virtual object 15 may be created as a three-dimensional model having a plurality of polygons by using photogrammetry based on captured images 18 captured from the candidate imaging points of view C4. Specifically, instead of the loop C, the main server system 1100P may acquire, from the sub-server systems 1100G, captured images 18 from all the candidate imaging points of view C4 as photogrammetry raw-material images, execute photogrammetry processing using the acquired captured images 18 as raw materials, and create virtual objects 15. In this case, it is not necessary to execute the billboard processing (step S72 illustrated in
Furthermore, although an example where both a position and a line-of-sight direction are controlled in each virtual space in relation to various types of points of view (the user's point of view C1, the participating user's point of view C2, the embodiment point of view C3, and a candidate imaging point of view) has been described, the present disclosure is not limited to the embodiment. Depending on (a) main expression targets in the first virtual space 11 and the second virtual space 12, (b) content of game implemented in the second virtual space 12, (c) design of an avatar 8 and a player character 4, and (d) shape of a virtual object 15, for example, such a configuration may be applied that either a position or a line-of-sight direction is controlled. Furthermore, step S66 may be omitted for an object at a fixed position in the second virtual space 12 (for example, a background object of a certain type).
In relation to step S30 and step S36 (see
Furthermore, for control for a movement and an orientation of a virtual object 15 in accordance with a change in movement or direction of the user's point of view C1, whether only the movement is controlled, only the orientation is controlled, or both are controlled also differs depending on the content of (a) to (d) described above.
That is, in relation to step S72, it can be said that the main server system 1100P controls the position and/or the orientation of a virtual object 15 in accordance with the position and/or the orientation of the user's point of view C1 in the first virtual space 11.
Control for the embodiment point of view C3 in the second virtual space 12 is similar to that described above.
That is, whether only the position of the user's point of view C1 changes, only the line-of-sight direction changes, or both change differs depending on the content of (a) to (d) described above. Furthermore, whether only the movement of the embodiment point of view C3 is controlled, only the orientation is controlled, or both are controlled also differs depending on the content of (a) to (d) described above.
That is, it can be said that the sub-server system 1100G controls, in relation to step S182, the position and/or the orientation of the embodiment point of view C3 in the second virtual space in accordance with the position and/or the orientation of the user's point of view C1 in the first virtual space 11.
Although only some embodiments of the present disclosure have been described in detail above, those skilled in the art will readily appreciate that many modifications are possible in the embodiments without materially departing from the novel teachings and advantages of this disclosure. Accordingly, all such modifications are intended to be included within scope of this disclosure.
| Number | Date | Country | Kind |
| --- | --- | --- | --- |
| 2022-056821 | Mar 2022 | JP | national |
This application is a continuation of International Patent Application No. PCT/JP2023/009092, having an international filing date of Mar. 9, 2023, which designated the United States, the entirety of which is incorporated herein by reference. Japanese Patent Application No. 2022-056821 filed on Mar. 30, 2022 is also incorporated herein by reference in its entirety.
|  | Number | Date | Country |
| --- | --- | --- | --- |
| Parent | PCT/JP2023/009092 | Mar 2023 | WO |
| Child | 18886023 |  | US |