Embodiments of the present disclosure relate to the field of computer and Internet technologies, and in particular, to a picture display method, a terminal device, a storage medium, and a program product.
At present, game applications often provide a three-dimensional virtual environment in which users can control virtual characters to perform various operations, offering a more realistic game experience.
If a user locks a target virtual character, i.e., a "locked character", in a three-dimensional virtual environment, the game application controls a virtual camera to observe the locked character by using a "self character," the virtual character controlled by the user, as a visual focus, and presents pictures captured by the virtual camera to the user. This, however, may easily cause the self character to block the locked character, which degrades the display effect of the pictures.
According to one aspect of the present disclosure, a picture display method is provided. The method is performed by a terminal device and includes: displaying a first picture frame that is obtained by using a virtual camera to capture a three-dimensional virtual environment using a virtual follower object in the three-dimensional virtual environment as a visual focus; determining a target position of the virtual follower object according to a target position of a self character and a target position of a first locked character, the first locked character being a locked target corresponding to the self character in a character-locked state; determining a target position and target orientation of the virtual camera according to the target position of the virtual follower object, a distance between the target position of the virtual camera and the target position of the virtual follower object being shorter than a distance between the target position of the virtual camera and the target position of the first locked character; performing interpolation according to the target position and target orientation of the virtual camera and an actual position and actual orientation of the virtual camera in the first picture frame, to obtain a single-frame target position and single-frame target orientation of the virtual camera in a second picture frame; and generating and displaying the second picture frame based on the single-frame target position and single-frame target orientation of the virtual camera in the second picture frame.
According to another aspect of the present disclosure, a terminal device is provided and includes a processor and a memory, the memory storing a computer program, and the computer program being loaded and executed by the processor to implement a picture display method. The method includes: displaying a first picture frame that is obtained by using a virtual camera to capture a three-dimensional virtual environment using a virtual follower object in the three-dimensional virtual environment as a visual focus; determining a target position of the virtual follower object according to a target position of a self character and a target position of a first locked character, the first locked character being a locked target corresponding to the self character in a character-locked state; determining a target position and target orientation of the virtual camera according to the target position of the virtual follower object, a distance between the target position of the virtual camera and the target position of the virtual follower object being shorter than a distance between the target position of the virtual camera and the target position of the first locked character; performing interpolation according to the target position and target orientation of the virtual camera and an actual position and actual orientation of the virtual camera in the first picture frame, to obtain a single-frame target position and single-frame target orientation of the virtual camera in a second picture frame; and generating and displaying the second picture frame based on the single-frame target position and single-frame target orientation of the virtual camera in the second picture frame.
According to another aspect of the present disclosure, a non-transitory computer-readable storage medium is provided for storing a computer program, the computer program being loaded and executed by a processor to implement a picture display method. The method includes: displaying a first picture frame that is obtained by using a virtual camera to capture a three-dimensional virtual environment using a virtual follower object in the three-dimensional virtual environment as a visual focus; determining a target position of the virtual follower object according to a target position of a self character and a target position of a first locked character, the first locked character being a locked target corresponding to the self character in a character-locked state; determining a target position and target orientation of the virtual camera according to the target position of the virtual follower object, a distance between the target position of the virtual camera and the target position of the virtual follower object being shorter than a distance between the target position of the virtual camera and the target position of the first locked character; performing interpolation according to the target position and target orientation of the virtual camera and an actual position and actual orientation of the virtual camera in the first picture frame, to obtain a single-frame target position and single-frame target orientation of the virtual camera in a second picture frame; and generating and displaying the second picture frame based on the single-frame target position and single-frame target orientation of the virtual camera in the second picture frame.
The technical solutions provided in the embodiments of the present disclosure may include the following beneficial effects.
A virtual follower object in a three-dimensional virtual environment is used as a visual focus of a virtual camera. In a character-locked state, position information of the virtual follower object is determined based on position information of a self character and position information of a locked character, and a position and orientation of the virtual camera are updated based on the position information of the virtual follower object. During determination of the position information of the virtual follower object, both the position information of the self character and the position information of the locked character are taken into account, which prevents the locked character from being blocked by the self character, so that the determined position information of the virtual follower object is more reasonable and accurate. This ensures that, in a picture captured by the virtual camera taking the virtual follower object as the visual focus, the self character and the locked character can be presented to a user in a more reasonable and clearer way, which improves the camera motion reasonability of the virtual camera in the character-locked state and thus improves the display effect of a picture.
In addition, the distance between the target position of the virtual camera and the target position of the virtual follower object is kept shorter than the distance between the target position of the virtual camera and the target position of the first locked character, so that the virtual follower object is closer to the virtual camera than the first locked character within a visual field range of the virtual camera, thereby preventing the first locked character from blocking the visual field of the virtual camera and further improving the camera motion reasonability of the virtual camera in the character-locked state.
Reference will now be made in detail to exemplary embodiments of the disclosure, which are illustrated in the accompanying drawings. Wherever possible, the same reference numbers will be used throughout the drawings to refer to the same or like parts.
Virtual environment refers to an environment displayed (or provided) when a client of an application program (such as a game application) is run on a terminal device (also referred to as a terminal). The virtual environment refers to an environment created for virtual objects to engage in activities (such as game competitions and task execution). For example, the virtual environment can be a virtual house, a virtual island, a virtual map, and the like. The virtual environment can be a simulation of the real world, a semi-simulated and semi-fictional environment, or a purely fictional environment. In the embodiments of the present disclosure, the virtual environment is three-dimensional, which is a space composed of three dimensions: length, width, and height. Therefore, it can be referred to as a “three-dimensional virtual environment”.
Virtual character refers to a character controlled by a user account in an application. A game application is taken as an example. Virtual characters refer to game characters controlled by a user account in the game application. The virtual characters can take the form of a person, an animal, a cartoon figure, or any other form, without limitation. In the embodiments of the present disclosure, the virtual character is also three-dimensional, so it can be referred to as a "three-dimensional virtual character".
In different game applications, operations performed by user accounts to control the virtual characters may also vary. For example, in a shooting game application, a user account can control a virtual character to perform operations such as hitting, shooting, throwing virtual items, running, jumping, and casting skills.
Of course, in addition to the game application, other types of applications can also present virtual characters to users and provide corresponding functions for the virtual characters, for example, an Augmented Reality (AR) application, a social application, an interactive entertainment application, and any other suitable application, without limitation. In addition, for different applications, the forms of the virtual characters provided therein may also vary, and the corresponding functions may also vary. This can be pre-configured according to actual needs.
The terminal 10 may be an electronic device such as a mobile phone, a tablet computer, a game console, a multimedia playback device, a personal computer (PC), a vehicle-mounted terminal, or a smart TV. The terminal 10 may be installed with a client of a target application. The target application can refer to applications that can provide a three-dimensional virtual environment, such as a game application, a simulation application, and an entertainment application. Exemplarily, game applications that can provide a three-dimensional (3D) virtual environment include but are not limited to: corresponding applications such as a 3D action game (3D ACT), a 3D shooting game, and a 3D multiplayer online battle arena (MOBA) game.
The server 20 is configured to provide a background service for the client of the target application installed in the terminal 10. For example, the server 20 may be a background server of the above target application. The server 20 may be one server, a server cluster including a plurality of servers, or a cloud computing service center.
The terminal 10 communicates with the server 20 by using a network 30. The network 30 may be a wired network, or may be a wireless network.
Step 210: Display a first picture frame, the first picture frame being a picture obtained by using a virtual camera to capture a three-dimensional virtual environment using a virtual follower object in the three-dimensional virtual environment as a visual focus.
When presenting content of the three-dimensional virtual environment to a user, the client displays one picture frame after another. The picture frames are images obtained by using the virtual camera to capture the three-dimensional virtual environment. For example, the first picture frame mentioned above may refer to an image obtained by using the virtual camera to capture the three-dimensional virtual environment at a current moment. The three-dimensional virtual environment may include virtual characters, for example, a virtual character controlled by the user (referred to as the "self character" in the embodiments of the present disclosure) and virtual characters controlled by other users or by the system (for example, by Artificial Intelligence (AI)). In some embodiments, the three-dimensional virtual environment may also include other virtual items, for example, virtual houses, virtual vehicles, and/or virtual trees, without limitation. In an embodiment of the present disclosure, a virtual camera technology can be used to generate picture frames. That is, the client observes the three-dimensional virtual environment with the virtual camera serving as an observation viewing angle and captures the three-dimensional virtual environment in real time (or at a fixed interval) to obtain picture frames. Contents of the picture frames change as the position of the virtual camera changes.
In an embodiment of the present disclosure, the virtual camera takes a virtual follower object in the three-dimensional virtual environment as a visual focus. The virtual follower object is an invisible object: it is not a virtual character or virtual item, nor does it have a shape, and it can be regarded as a point in the three-dimensional virtual environment. The virtual follower object may undergo corresponding position changes as the position of the self character (and optionally of other virtual characters) changes. The virtual camera may follow the movement of the virtual follower object (in both position and orientation), thus capturing things around the virtual follower object in the three-dimensional virtual environment and presenting them to the user in picture frames.
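Purely for illustration, the relationship between the virtual camera and the virtual follower object can be sketched in Python as follows; the names (FollowerObject, VirtualCamera, aim_at_follower) and the tuple-based vector type are hypothetical and not part of the present disclosure:

```python
# Illustrative sketch only: the follower object is a shapeless point
# that the camera keeps as its visual focus. All names are hypothetical.
from dataclasses import dataclass

Vec3 = tuple[float, float, float]

@dataclass
class FollowerObject:
    position: Vec3 = (0.0, 0.0, 0.0)  # a point in the 3D virtual environment

@dataclass
class VirtualCamera:
    position: Vec3 = (0.0, 0.0, 0.0)
    look_at: Vec3 = (0.0, 0.0, 0.0)   # the point the camera is aimed at

def aim_at_follower(camera: VirtualCamera, follower: FollowerObject) -> None:
    # The camera's orientation is driven entirely by the follower's position.
    camera.look_at = follower.position
```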
Step 220: Determine a target position of a virtual follower object according to a target position of a self character and a target position of a first locked character. The first locked character is a locked target corresponding to the self character in a character-locked state.
The character-locked state refers to a state where the self character takes another virtual character as a locked target. This other virtual character may be a virtual character controlled by another user or by the system. In the character-locked state, the position and orientation of the virtual camera need to change as the positions of the self character and the locked character change, so that the self character and the locked character can be contained in the picture frames captured by the virtual camera as far as possible, and the user can watch the self character and the locked character in the picture frames.
In an embodiment of the present disclosure, because the visual focus of the virtual camera is the virtual follower object, the position and orientation of the virtual camera will change as the position of the virtual follower object changes, while the position of the virtual follower object will change as the positions of the self character and the locked character change. The locked character is the locked target corresponding to the self character. In some embodiments, the locked character will be marked and displayed, and operations corresponding to the self character will be applied to the locked character. In some embodiments, the first locked character mentioned above can be any one or more of other virtual characters locked by the self character.
In some embodiments, the case where the first locked character serves as the locked target of the self character is taken as an example to explain an update process of the virtual camera in the character-locked state. Step 220 can include the following exemplary substeps:
1: Determine a first target position of the virtual follower object on a target straight line by taking the target position of the self character as a follow target. The target straight line is perpendicular to a connecting line between the target position of the self character and the target position of the first locked character.
In the character-locked state, on the one hand, the virtual follower object still needs to take the self character as the follow target and move with the movement of the self character. On the other hand, in order to present the currently locked first locked character in the picture frames, the target position of the virtual follower object also needs to consider the target position of the first locked character.
In an embodiment of the present disclosure, the target position can be understood as a planned position, that is, a desired position or a position to which an object expects to move. For example, the target position of the self character refers to a position desired by the self character or to which the self character expects to move (for example, in a next frame following the first picture frame), and the target position of the first locked character refers to a position desired by the first locked character or to which the first locked character expects to move. The target position of the self character can be determined according to a control operation performed by the user on the self character. The target position of the first locked character can be determined according to a control operation performed by the system or another user on the first locked character.
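A hedged Python sketch of this substep, working in the two-dimensional reference plane, is given below. The lateral_offset parameter and the rule for choosing which side of the connecting line to use are assumptions made for illustration; the disclosure itself only requires that the first target position lie on the perpendicular target straight line while following the self character.

```python
import math

Vec2 = tuple[float, float]

def first_target_position(self_pos: Vec2, locked_pos: Vec2,
                          follower_pos: Vec2, lateral_offset: float) -> Vec2:
    # Direction of the connecting line from the self character to the
    # first locked character.
    dx, dy = locked_pos[0] - self_pos[0], locked_pos[1] - self_pos[1]
    length = math.hypot(dx, dy) or 1.0
    # Unit vector perpendicular to the connecting line; the target
    # straight line through self_pos runs along this direction.
    px, py = -dy / length, dx / length
    # Assumed rule: stay on the perpendicular side the follower already occupies.
    dot = (follower_pos[0] - self_pos[0]) * px + (follower_pos[1] - self_pos[1]) * py
    side = 1.0 if dot >= 0.0 else -1.0
    return (self_pos[0] + side * lateral_offset * px,
            self_pos[1] + side * lateral_offset * py)
```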
2: Determine the first target position of the virtual follower object as the target position of the virtual follower object when the first target position of the virtual follower object satisfies a condition.
3: Adjust the first target position of the virtual follower object when the first target position of the virtual follower object does not satisfy the condition, to obtain the target position of the virtual follower object.
In an embodiment of the present disclosure, after the first target position of the virtual follower object is determined, it is necessary to determine whether the first target position satisfies the condition. If the condition is satisfied, the first target position is determined as the target position of the virtual follower object. If the condition is not satisfied, it is necessary to adjust the first target position to obtain the target position of the virtual follower object, and the target position obtained by adjustment satisfies the above condition. The condition is set to ensure that the target position of the virtual follower object is at a relatively appropriate position, so that when the virtual camera takes the virtual follower object as the visual focus for capturing, the self character and the first locked character can both be included in a picture and their positions do not overlap, thereby improving the display effect of a picture.
In some embodiments, the above condition includes: an offset distance of the first target position of the virtual follower object from the target position of the self character is less than or equal to a maximum offset. The first target position of the virtual follower object is adjusted based on the maximum offset when the offset distance of the first target position of the virtual follower object from the target position of the self character is greater than the maximum offset, to obtain the target position of the virtual follower object, such that an offset distance of the target position of the virtual follower object from the target position of the self character is less than or equal to the maximum offset. In some embodiments, the maximum offset may be a value greater than 0, and may be a fixed value or a value dynamically determined based on the position of the virtual camera.
In some embodiments, the above condition further includes: the offset distance of the first target position of the virtual follower object from the target position of the self character is greater than a minimum offset. The first target position of the virtual follower object is adjusted based on the minimum offset when the offset distance of the first target position of the virtual follower object from the target position of the self character is less than or equal to the minimum offset, to obtain the target position of the virtual follower object, such that the offset distance of the target position of the virtual follower object from the target position of the self character is greater than the minimum offset. In some embodiments, a value of the minimum offset can be 0 or a value greater than 0, without limitation, and the minimum offset is less than the maximum offset mentioned above. The minimum offset may likewise be a fixed value or a value dynamically determined based on the position of the virtual camera.
In some embodiments, the above condition further includes: the first target position of the virtual follower object is located within a backside angle region of the self character. The first target position of the virtual follower object is adjusted based on the backside angle region of the self character when the first target position of the virtual follower object is located beyond the backside angle region of the self character, to obtain the target position of the virtual follower object, such that the target position of the virtual follower object is located within the backside angle region of the self character. The backside angle region of the self character refers to an angular region facing away from the first locked character, taking a straight line passing through the target position of the self character and the target position of the first locked character as a central axis. The present disclosure does not limit a size of the backside angle region; for example, it can be 90 degrees, 120 degrees, 150 degrees, 180 degrees, or any other suitable value, which can be set according to actual needs.
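The three conditions above can be combined into a single clamping routine, sketched below in the two-dimensional reference plane. The parameter values and the epsilon used to keep the offset strictly greater than the minimum offset are placeholders:

```python
import math

Vec2 = tuple[float, float]

def constrain_follower_target(target: Vec2, self_pos: Vec2, locked_pos: Vec2,
                              min_offset: float, max_offset: float,
                              backside_angle_deg: float) -> Vec2:
    ox, oy = target[0] - self_pos[0], target[1] - self_pos[1]
    dist = math.hypot(ox, oy)
    # Offset distance must fall in (min_offset, max_offset].
    dist = max(min(dist, max_offset), min_offset + 1e-6)
    # Central axis of the backside angle region: the direction from the
    # locked character through the self character (facing away from it).
    axis = math.atan2(self_pos[1] - locked_pos[1], self_pos[0] - locked_pos[0])
    half = math.radians(backside_angle_deg) / 2.0
    # Clamp the angular deviation from the axis into [-half, +half].
    angle = math.atan2(oy, ox)
    dev = (angle - axis + math.pi) % (2.0 * math.pi) - math.pi
    dev = max(-half, min(half, dev))
    angle = axis + dev
    return (self_pos[0] + dist * math.cos(angle),
            self_pos[1] + dist * math.sin(angle))
```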
Step 230: Determine a target position and target orientation of the virtual camera according to the target position of the virtual follower object, a distance between the target position of the virtual camera and the target position of the virtual follower object being shorter than a distance between the target position of the virtual camera and the target position of the first locked character.
After the target position of the virtual follower object is determined, the target position and target orientation of the virtual camera can be determined according to the target position of the virtual follower object. In an embodiment of the present disclosure, the distance between the target position of the virtual camera and the target position of the virtual follower object is kept shorter than the distance between the target position of the virtual camera and the target position of the first locked character, so that the virtual follower object is closer to the virtual camera than the first locked character within a visual field range of the virtual camera, thereby preventing the first locked character from blocking the visual field of the virtual camera and further improving the camera motion reasonability of the virtual camera.
In some embodiments, step 230 includes the following exemplary substeps:
1: Determine, according to the target position of the virtual follower object, a rotating track where the virtual camera is located, where a plane in which the rotating track is located is parallel to a reference plane of the three-dimensional virtual environment, and a central axis of the rotating track passes through the target position of the virtual follower object.
The rotating track in the present disclosure refers to a moving track of the virtual camera. The virtual camera can automatically follow the virtual follower object while moving on the rotating track. The rotating track may be circular, elliptical, or of any other suitable shape.
2: Determine the target position and target orientation of the virtual camera on the rotating track according to the target position of the virtual follower object and the target position of the first locked character.
The target position of the virtual camera refers to a position theoretically desired by the virtual camera or to which the virtual camera theoretically expects to move. The target orientation of the virtual camera refers to an orientation theoretically desired or expected by the virtual camera. A single-frame target position of the virtual camera below refers to a position actually desired by the virtual camera or to which the virtual camera actually expects to move, which is used for transitioning the virtual camera from a current position to the target position. A single-frame target orientation of the virtual camera below refers to an actual or expected orientation of the virtual camera, which is used for transitioning the virtual camera from a current orientation to the target orientation. If the target position of the first locked character and the target position of the virtual follower object are projected onto the reference plane of the three-dimensional virtual environment, a projection point of the target position of the virtual camera in the reference plane is located on a straight line passing through the target position of the first locked character and the target position of the virtual follower object, and the target position of the virtual follower object is located between the projection point and the target position of the first locked character.
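An illustrative Python sketch of this placement follows, assuming a circular rotating track of an assumed radius at an assumed height above the reference plane; positions are (x, y, z) tuples with z as height:

```python
import math

Vec3 = tuple[float, float, float]

def camera_on_track(follower: Vec3, locked: Vec3,
                    radius: float, height: float) -> tuple[Vec3, Vec3]:
    # Horizontal direction from the locked character toward the follower;
    # the camera sits on the far side of the follower along this line, so
    # the follower lies between the camera's projection and the locked character.
    dx, dy = follower[0] - locked[0], follower[1] - locked[1]
    length = math.hypot(dx, dy) or 1.0
    ux, uy = dx / length, dy / length
    position = (follower[0] + radius * ux,
                follower[1] + radius * uy,
                follower[2] + height)  # track plane parallel to the reference plane
    # Target orientation: a look direction from the camera toward the
    # follower object, which is the camera's visual focus.
    look = (follower[0] - position[0],
            follower[1] - position[1],
            follower[2] - position[2])
    return position, look
```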
Step 240: Perform interpolation according to the target position and target orientation of the virtual camera and an actual position and actual orientation of the virtual camera in the first picture frame, to obtain a single-frame target position and single-frame target orientation of the virtual camera in a second picture frame.
After the target position of the virtual camera is determined, in combination with the actual position of the virtual camera in the first picture frame, the single-frame target position of the virtual camera in the second picture frame can be obtained by using a first interpolation algorithm. The goal of the first interpolation algorithm is to gradually (or smoothly) move the virtual camera to the target position.
Similarly, after the target orientation of the virtual camera is determined, in combination with the actual orientation of the virtual camera in the first picture frame, the single-frame target orientation of the virtual camera in the second picture frame can be obtained by using a second interpolation algorithm. The goal of the second interpolation algorithm is to gradually (or smoothly) rotate the virtual camera to the target orientation.
In some embodiments, the process of determining the single-frame target position of the virtual camera in the second picture frame includes: determining a first interpolation coefficient according to a first distance, the first distance being a distance between the first locked character and the self character, and the first interpolation coefficient being used for determining an adjustment amount of a position of the virtual camera; and determining the single-frame target position of the virtual camera in the second picture frame according to the target position of the virtual camera, the actual position of the virtual camera in the first picture frame, and the first interpolation coefficient.
In some embodiments, the first interpolation coefficient and the first distance are in a positive correlation; that is, the greater the first distance, the greater the first interpolation coefficient.
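For illustration, the position interpolation might be sketched as follows. The clamped linear ramp that maps the first distance to the coefficient (including the ramp_start and ramp_end values) is an assumption; the disclosure requires only a positive correlation:

```python
import math

def lerp3(a, b, t):
    # Component-wise linear interpolation between two 3D points.
    return tuple(ai + (bi - ai) * t for ai, bi in zip(a, b))

def single_frame_position(actual_pos, target_pos, self_pos, locked_pos,
                          ramp_start=2.0, ramp_end=20.0):
    # First distance: separation between the first locked character and
    # the self character.
    first_distance = math.dist(self_pos, locked_pos)
    # Positive correlation, clamped to [0, 1]: the farther apart the two
    # characters are, the larger the per-frame adjustment amount.
    t = (first_distance - ramp_start) / (ramp_end - ramp_start)
    coeff = max(0.0, min(1.0, t))
    return lerp3(actual_pos, target_pos, coeff)
```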
In some embodiments, the process of determining the single-frame target orientation of the virtual camera in the second picture frame includes: determining a second interpolation coefficient according to a second distance, the second distance being a distance between the first locked character and a central axis in a picture, and the second interpolation coefficient being used for determining an adjustment amount of an orientation of the virtual camera; and determining the single-frame target orientation of the virtual camera in the second picture frame according to the target orientation of the virtual camera, the actual orientation of the virtual camera in the first picture frame, and the second interpolation coefficient.
In some embodiments, the second interpolation coefficient and the second distance are in a positive correlation; that is, the greater the second distance, the greater the second interpolation coefficient.
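Similarly, the orientation interpolation might be sketched as below, with the orientation reduced to a single yaw angle for brevity (a production engine would typically interpolate a quaternion); the normalization of the second distance by half the screen width is an assumption:

```python
import math

def single_frame_yaw(actual_yaw, target_yaw,
                     locked_screen_x, screen_half_width):
    # Second distance: offset of the first locked character from the
    # picture's central axis, normalized here to [0, 1].
    second_distance = min(abs(locked_screen_x) / screen_half_width, 1.0)
    coeff = second_distance  # positive correlation with the second distance
    # Interpolate along the shortest angular path toward the target orientation.
    delta = (target_yaw - actual_yaw + math.pi) % (2.0 * math.pi) - math.pi
    return actual_yaw + coeff * delta
```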
Step 250: Generate and display the second picture frame based on the single-frame target position and single-frame target orientation of the virtual camera in the second picture frame.
After the single-frame target position and single-frame target orientation of the virtual camera in the second picture frame are determined, the client can control the virtual camera to be placed according to the above single-frame target position and single-frame target orientation, take pictures of the three-dimensional virtual environment by taking the virtual follower object in the three-dimensional virtual environment as the visual focus, to obtain the second picture frame, and display the second picture frame.
In some embodiments, the second picture frame may be a next picture frame of the first picture frame and can be displayed after the first picture frame is displayed. For example, if the first picture frame is a picture frame at a current moment, the second picture frame is a picture frame at a next moment after the current moment. The single-frame target position mentioned above is a true position of the virtual camera at the next moment, and the single-frame target orientation mentioned above is a true orientation of the virtual camera at the next moment.
In addition, the embodiments of the present disclosure take a process of switching from the first picture frame to the second picture frame as an example to explain a picture switching process in the character-locked state. It is understood that a process of switching between any two picture frames in the character-locked state can be achieved according to the process of switching from the first picture frame to the second picture frame described above.
As such, a virtual follower object in a three-dimensional virtual environment is used as a visual focus of a virtual camera. In a character-locked state, position information of the virtual follower object is determined based on position information of a self character and position information of a locked character, and a position and orientation of the virtual camera are updated based on the position information of the virtual follower object. During determination of the position information of the virtual follower object, both the position information of the self character and the position information of the locked character are taken into account, which prevents the locked character from being blocked by the self character, so that the determined position information of the virtual follower object is more reasonable and accurate. This ensures that, in a picture captured by the virtual camera taking the virtual follower object as the visual focus, the self character and the locked character can be presented to a user in a more reasonable and clearer way, which improves the camera motion reasonability of the virtual camera in the character-locked state and thus improves the display effect of a picture.
In addition, the distance between the target position of the virtual camera and the target position of the virtual follower object is kept shorter than the distance between the target position of the virtual camera and the target position of the first locked character, so that the virtual follower object is closer to the virtual camera than the first locked character within a visual field range of the virtual camera, thereby preventing the first locked character from blocking the visual field of the virtual camera and further improving the camera motion reasonability of the virtual camera in the character-locked state.
In some embodiments, the method further includes the following steps.
Step 1110: Control, in the character-locked state in response to a visual field adjustment operation performed on the self character, the virtual camera to rotate around the virtual follower object.
In the character-locked state, assume that the first locked character is currently locked. The user can control the virtual camera to rotate around the virtual follower object on the rotating track of the virtual camera by performing the visual field adjustment operation on the self character, so as to switch the locked character. For the rotating track of the virtual camera, refer to the explanation in the embodiments above, which will not be repeated here. The visual field adjustment operation is used for adjusting an observation visual field of the virtual camera. For example, a rotating direction and rotating speed of the virtual camera can be determined according to the visual field adjustment operation. For example, the visual field adjustment operation is a sliding operation of a finger of the user on a screen (in a non-key region). The rotating direction of the virtual camera can be determined according to a direction of the sliding operation, and the rotating speed of the virtual camera can be determined according to a sliding speed or a sliding distance of the sliding operation.
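An illustrative sketch of this mapping is given below; the sensitivity constant and the yaw-only rotation are assumptions made for the example:

```python
import math

def rotate_camera_around_follower(cam_pos, follower_pos,
                                  slide_dx_pixels, sensitivity=0.005):
    # Slide direction picks the rotating direction; slide distance (or
    # speed) picks the rotating speed for this frame.
    yaw_delta = slide_dx_pixels * sensitivity
    dx, dy = cam_pos[0] - follower_pos[0], cam_pos[1] - follower_pos[1]
    cos_a, sin_a = math.cos(yaw_delta), math.sin(yaw_delta)
    # Rotate the camera's horizontal offset about the follower's vertical axis.
    return (follower_pos[0] + dx * cos_a - dy * sin_a,
            follower_pos[1] + dx * sin_a + dy * cos_a,
            cam_pos[2])
```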
In some embodiments, in the character-locked state, in response to the visual field adjustment operation performed on the self character, the client performs switching from the character-locked state to a non-character-locked state. In the non-character-locked state, the virtual camera is controlled to rotate around the virtual follower object according to the visual field adjustment operation.
In some embodiments, the character-locked state and non-character-locked state have corresponding virtual cameras. For ease of description, the virtual camera used in the character-locked state is referred to as a first virtual camera, and the virtual camera used in the non-character-locked state is referred to as a second virtual camera. In the character-locked state, the first virtual camera is in a working state, and the second virtual camera is in a non-working state. The client can update the position and orientation of the first virtual camera according to the method flow described in the embodiments above.
Step 1120: Determine, in the rotating process, a pre-locked character in the three-dimensional virtual environment, and display a third picture frame, the third picture frame displaying the pre-locked character and a pre-locked mark corresponding to the pre-locked character.
During the rotation of the virtual camera, the first locked character is no longer locked, and the client is in the non-character-locked state. The non-character-locked state at this time can also be referred to as a pre-locked state. In the pre-locked state, the client determines the pre-locked character in the three-dimensional virtual environment according to the positions of the various virtual characters in the three-dimensional virtual environment, as well as the position and orientation of the virtual camera. For example, the visual focus (namely, the virtual follower object) of the virtual camera is determined based on the position and orientation of the virtual camera, and a virtual character closest to the visual focus is determined as the pre-locked character. The pre-locked character refers to a virtual character that is about to be locked, or that may possibly be locked. In addition, in the pre-locked state, if there is a pre-locked character, a pre-locked mark corresponding to the pre-locked character will be displayed in a picture frame displayed by the client to remind the user which virtual character is currently pre-locked.
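The selection rule described above, choosing the candidate closest to the visual focus, may be sketched as follows; eligibility filtering and tie-breaking are omitted:

```python
import math

def pick_pre_locked(focus_pos, candidates):
    # candidates: iterable of (character_id, position) pairs; returns the
    # pair whose position is closest to the visual focus, or None if empty.
    return min(candidates,
               key=lambda c: math.dist(c[1], focus_pos),
               default=None)
```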
Step 1130: Determine the pre-locked character as a second locked character in response to a lock confirmation operation performed on the pre-locked character, and display a fourth picture frame, the fourth picture frame displaying the second locked character and a locked mark corresponding to the second locked character.
The lock confirmation operation refers to a triggering operation performed by the user to determine the pre-locked character as a locked character. Continuing with the example in which the visual field adjustment operation is a sliding operation of the user's finger on the screen: when the finger leaves the screen, the sliding operation ends, and the operation of ending the sliding operation is determined as the lock confirmation operation. The client can determine the pre-locked character at the end of the sliding operation as the second locked character.
In some embodiments, after determining the pre-locked character as the second locked character, the client may also perform switching from the non-character-locked state (or the pre-locked state) to the character-locked state. In the character-locked state, the position and orientation of the virtual camera will be updated according to the method flow described in the embodiments above.
In some embodiments, if the character-locked state and non-character-locked state have corresponding virtual cameras, when the client performs switching from the non-character-locked state (or the pre-locked state) to the character-locked state, the client will also switch the currently used virtual camera from the second virtual camera to the first virtual camera, and then the position and orientation of the first virtual camera will be updated according to the method flow described in the embodiments above.
In addition, a locked mark is used for distinguishing a locked character from other non-locked characters. The locked mark can be different from the pre-locked mark, allowing the user to distinguish whether a virtual character is a pre-locked character or a locked character based on the different marks.
In an embodiment of the present disclosure, in the character-locked state, switching of the locked character is also supported through adjustment of the visual field of the self character. In addition, during the switching process, the client automatically predicts the pre-locked character and displays the pre-locked mark corresponding to the pre-locked character, so that the user can intuitively and clearly see which virtual character is currently in the pre-locked state, making it convenient for the user to switch the locked character accurately and efficiently.
In some embodiments, an update process of the virtual camera in a non-character-locked state includes the following steps.
Step 1310: Update, in a non-character-locked state, the position of the virtual follower object in an interpolation manner by taking the self character as a follow target, to obtain a single-frame target position of the virtual follower object in a fifth picture frame.
In the non-character-locked state, the visual focus of the virtual camera is still the virtual follower object. At this time, there is no locked character, so the position update of the virtual follower object only needs to consider changes of the position of the self character, without considering changes of the position of a locked character. In some embodiments, in the non-character-locked state, the single-frame target position of the virtual follower object is determined by using a third interpolation algorithm, and the goal of the third interpolation algorithm is to make the virtual follower object smoothly follow the self character.
In some embodiments, in the non-character-locked state, a third interpolation coefficient is determined according to a third distance. The third distance refers to a distance between the self character and the virtual follower object. The third interpolation coefficient is used for determining an adjustment amount of the position of the virtual follower object. The third interpolation coefficient and the third distance are in a positive correlation. The single-frame target position of the virtual follower object in the fifth picture frame is determined according to the actual position of the self character in the first picture frame, the actual position of the virtual follower object in the first picture frame, and the third interpolation coefficient. Exemplarily, the third interpolation coefficient can also be a value in [0, 1]. A distance between the actual position of the self character in the first picture frame and the actual position of the virtual follower object in the first picture frame is calculated, and this distance is multiplied by the third interpolation coefficient to obtain the adjustment amount of the position; then, the actual position of the virtual follower object in the first picture frame is translated towards the self character by the above adjustment amount, to obtain the single-frame target position of the virtual follower object in the fifth picture frame. The fifth picture frame may be a next picture frame of the first picture frame. In the above manner, if the self character is farther from the virtual follower object, the virtual follower object has a higher follow speed; if the self character is closer to the virtual follower object, the virtual follower object has a lower follow speed. Because the virtual follower object slowly follows the self character in the three-dimensional virtual environment, even if the self character has an irregular displacement or a significant misalignment from other virtual characters, the virtual camera may still translate smoothly, which improves the camera motion reasonability of the virtual camera in the non-character-locked state.
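A sketch of this follow behavior is given below; as with the earlier coefficients, the clamped linear ramp realizing the positive correlation is an assumed mapping:

```python
import math

def follower_single_frame_position(follower_pos, self_pos,
                                   ramp_start=0.5, ramp_end=10.0):
    # Third distance: separation between the self character and the follower.
    third_distance = math.dist(follower_pos, self_pos)
    t = (third_distance - ramp_start) / (ramp_end - ramp_start)
    coeff = max(0.0, min(1.0, t))  # positive correlation, clamped to [0, 1]
    # Translate the follower toward the self character by coeff * distance.
    return tuple(f + coeff * (s - f) for f, s in zip(follower_pos, self_pos))
```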
Step 1320: Determine a single-frame target position of the virtual camera in the fifth picture frame according to the single-frame target position of the virtual follower object in the fifth picture frame.
After the single-frame target position of the virtual follower object in the fifth picture frame is obtained, the single-frame target position of the virtual camera in the fifth picture frame can be determined according to an existing positional relationship between the virtual camera and the virtual follower object.
Step 1330: Determine, when no visual field adjustment operation performed on the self character is obtained, the actual orientation of the virtual camera in the first picture frame as a single-frame target orientation of the virtual camera in the fifth picture frame.
In the non-character-locked state, if the user does not perform the visual field adjustment operation on the self character to adjust the orientation of the visual field, the client maintains the orientation of the virtual camera in the previous frame.
Step 1340: Adjust, when a visual field adjustment operation performed on the self character is obtained, the actual orientation of the virtual camera in the first picture frame according to the visual field adjustment operation, to obtain a single-frame target orientation of the virtual camera in the fifth picture frame.
In the non-character-locked state, if the user performs the visual field adjustment operation on the self character to adjust the orientation of the visual field, the client needs to update the orientation of the virtual camera. In some embodiments, the client updates the orientation of the virtual camera according to the visual field adjustment operation. For example, the visual field adjustment operation is a sliding operation on the screen. The client may determine an adjustment direction and adjustment angle of the orientation of the virtual camera according to information such as a direction and a displacement of the sliding operation, and then determine a target orientation in the next frame in combination with the orientation in the previous frame.
Step 1350: Generate and display the fifth picture frame based on the single-frame target position and single-frame target orientation of the virtual camera in the fifth picture frame.
After the single-frame target position and single-frame target orientation of the virtual camera in the fifth picture frame are determined, the client can control the virtual camera to be placed according to the above single-frame target position and single-frame target orientation, take pictures of the three-dimensional virtual environment by taking the virtual follower object in the three-dimensional virtual environment as the visual focus, to obtain the fifth picture frame, and display the fifth picture frame.
In an embodiment of the present disclosure, the virtual follower object is also controlled to follow the self character to move smoothly in the non-character-locked state, and then the virtual camera takes pictures by taking the virtual follower object as the visual focus. Because the virtual follower object slowly follows the self character in the three-dimensional virtual environment, even if the self character has an irregular displacement or a significant misalignment from other virtual characters, the virtual camera may still translate smoothly to avoid dramatic shaking or the like of contents in the pictures, thereby improving the watching experience of the user.
The technical solution of the present disclosure will be described below with reference to the accompanying drawings.
In the character-locked state, if the user performs a visual field adjustment operation, the client controls the virtual camera to rotate around the virtual follower object to determine a pre-locked character. In this way, the update of the virtual camera is completed in a pre-locked state.
In a non-character-locked state, the position of the virtual follower object is updated through interpolation by taking the self character as a follow target. Then, the client determines whether the user performs a visual field adjustment operation. If the user performs a visual field adjustment operation, the client determines a single-frame target orientation according to the visual field adjustment operation. If the user does not perform a visual field adjustment operation, the client determines a current actual orientation of the virtual camera as a single-frame target orientation. In this way, the update of the virtual camera is completed in the non-character-locked state.
During running of the client, the position and orientation of the virtual camera need to be updated at each frame. Then, the three-dimensional virtual environment is captured based on the updated position and orientation by taking the virtual follower object as the visual focus, to obtain picture frames displayed to the user.
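The per-frame update described above can be summarized in the following illustrative sketch; every helper callable is a hypothetical stand-in for the operations detailed in the earlier embodiments:

```python
def update_camera_for_frame(character_locked: bool,
                            has_view_adjustment: bool,
                            update_locked_camera,       # locked-state flow above
                            follow_self_character,      # interpolated follow
                            apply_view_adjustment,
                            keep_previous_orientation):
    if character_locked:
        # Follower target -> camera target -> interpolated single-frame pose.
        update_locked_camera()
    else:
        follow_self_character()
        if has_view_adjustment:
            apply_view_adjustment()
        else:
            keep_previous_orientation()
    # The environment is then captured with the follower object as the
    # visual focus to produce the picture frame shown to the user.
```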
Step 1610: Display a first picture frame, the first picture frame being a picture obtained by using a virtual camera to capture a three-dimensional virtual environment taking a virtual follower object in the three-dimensional virtual environment as a visual focus.
Step 1620: Display the second picture frame based on a single-frame target position and single-frame target orientation of the virtual camera in the second picture frame in response to movement of at least one of a self character and a first locked character, the single-frame target position and the single-frame target orientation being determined according to a target position and target orientation of the virtual camera, the target position and target orientation of the virtual camera being determined according to a target position of the virtual follower object, a distance between the target position of the virtual camera and the target position of the virtual follower object being shorter than a distance between the target position of the virtual camera and a target position of the first locked character, and the first locked character being a locked target corresponding to the self character in a character-locked state.
In the character-locked state, because both the self character and the first locked character may move, the position and orientation of the virtual camera need to be adaptively adjusted according to changes of the positions of the self character and the first locked character, ensuring that the self character and the locked character are contained in picture frames captured by the virtual camera as far as possible.
In an exemplary embodiment, step 1620 may include the following exemplary substeps:
1: Determine a target position of the self character and a target position of the first locked character in response to the movement of at least one of the self character and the first locked character.
2: Determine a target position of the virtual follower object according to the target position of the self character and the target position of the first locked character.
3: Determine a target position and target orientation of the virtual camera according to the target position of the virtual follower object.
4: Perform interpolation according to the target position and target orientation of the virtual camera and an actual position and actual orientation of the virtual camera in the first picture frame, to obtain a single-frame target position and single-frame target orientation of the virtual camera in a second picture frame.
5: Generate and display the second picture frame based on the single-frame target position and single-frame target orientation of the virtual camera in the second picture frame.
In some embodiments, this embodiment of the present disclosure can also support switching the locked character in the character-locked state. The method further includes:
In some embodiments, an update process of the virtual camera in a non-character-locked state may include the following steps:
For undescribed details in this embodiment, please refer to the explanation of those details in other method embodiments described above.
In summary, according to the technical solution provided in this embodiment of the present disclosure, a virtual follower object in a three-dimensional virtual environment is used as a visual focus of a virtual camera. In a character-locked state, position information of the virtual follower object is determined based on position information of a self character and position information of a locked character, and a position and orientation of the virtual camera are updated based on the position information of the virtual follower object. During determination of the position information of the virtual follower object, both the position information of the self character and the position information of the locked character are taken into account, which prevents the locked character from being blocked by the self character, so that the determined position information of the virtual follower object is more reasonable and accurate. This ensures that, in a picture captured by the virtual camera taking the virtual follower object as the visual focus, the self character and the locked character can be presented to a user in a more reasonable and clearer way, which improves the camera motion reasonability of the virtual camera in the character-locked state and thus improves the display effect of a picture.
In addition, the distance between the target position of the virtual camera and the target position of the virtual follower object is kept shorter than the distance between the target position of the virtual camera and the target position of the first locked character, so that the virtual follower object is closer to the virtual camera than the first locked character within a visual field range of the virtual camera, thereby preventing the first locked character from blocking the visual field of the virtual camera and further improving the camera motion reasonability of the virtual camera in the character-locked state.
The following describes an apparatus embodiment of the present disclosure, which can be configured to implement the method embodiment of the present disclosure. For details not disclosed in the apparatus embodiment of the present disclosure, refer to the method embodiment of the present disclosure.
The picture display module 1710 is configured to display a first picture frame, the first picture frame being a picture obtained by using a virtual camera to capture a three-dimensional virtual environment taking a virtual follower object in the three-dimensional virtual environment as a visual focus.
The object position determining module 1720 is configured to determine a target position of the virtual follower object according to a target position of a self character and a target position of a first locked character, the first locked character being a locked target corresponding to the self character in a character-locked state.
The camera position determining module 1730 is configured to determine a target position and target orientation of the virtual camera according to the target position of the virtual follower object, a distance between the target position of the virtual camera and the target position of the virtual follower object being shorter than a distance between the target position of the virtual camera and the target position of the first locked character.
The single-frame position determining module 1740 is configured to perform interpolation according to the target position and target orientation of the virtual camera and an actual position and actual orientation of the virtual camera in the first picture frame, to obtain a single-frame target position and single-frame target orientation of the virtual camera in a second picture frame.
The picture display module 1710 is further configured to generate and display the second picture frame based on the single-frame target position and single-frame target orientation of the virtual camera in the second picture frame.
In some embodiments, the single-frame position determining module 1740 is configured to:
In some embodiments, the single-frame position determining module 1740 is further configured to:
In some embodiments, the first interpolation coefficient and the first distance are in a positive correlation, and the second interpolation coefficient and the second distance are in a positive correlation.
In some embodiments, the object position determining module 1720 is configured to:
In some embodiments, the condition includes: an offset distance of the first target position of the virtual follower object from the target position of the self character is less than or equal to a maximum offset. The object position determining module 1720 is further configured to: adjust the first target position of the virtual follower object based on the maximum offset when the offset distance of the first target position of the virtual follower object from the target position of the self character is greater than the maximum offset, to obtain the target position of the virtual follower object; and an offset distance of the target position of the virtual follower object from the target position of the self character is less than or equal to the maximum offset.
In some embodiments, the condition includes: the first target position of the virtual follower object is located within a backside angle region of the self character. The object position determining module 1720 is further configured to: adjust the first target position of the virtual follower object based on the backside angle region of the self character when the first target position of the virtual follower object is located beyond the backside angle region of the self character, to obtain the target position of the virtual follower object; and the target position of the virtual follower object is located within the backside angle region of the self character.
In some embodiments, the camera position determining module 1730 is configured to control, in the character-locked state in response to a visual field adjustment operation performed on the self character, the virtual camera to rotate around the virtual follower object.
The picture display module 1710 is configured to: determine, in the rotating process, a pre-locked character in the three-dimensional virtual environment, and display a third picture frame, the third picture frame displaying the pre-locked character and a pre-locked mark corresponding to the pre-locked character.
The picture display module 1710 is further configured to: determine the pre-locked character as a second locked character in response to a lock confirmation operation performed on the pre-locked character, and display a fourth picture frame, the fourth picture frame displaying the second locked character and a locked mark corresponding to the second locked character.
In some embodiments, the object position determining module 1720 is further configured to update, in a non-character-locked state, the position of the virtual follower object in an interpolation manner by taking the self character as a follow target, to obtain a single-frame target position of the virtual follower object in a fifth picture frame.
The single-frame position determining module 1740 is further configured to: determine a single-frame target position of the virtual camera in the fifth picture frame according to the single-frame target position of the virtual follower object in the fifth picture frame; determine, when no visual field adjustment operation performed on the self character is obtained, the actual orientation of the virtual camera in the first picture frame as a single-frame target orientation of the virtual camera in the fifth picture frame; and adjust, when a visual field adjustment operation performed on the self character is obtained, the actual orientation of the virtual camera in the first picture frame according to the visual field adjustment operation, to obtain a single-frame target orientation of the virtual camera in the fifth picture frame.
The picture display module 1710 is further configured to generate and display the fifth picture frame based on the single-frame target position and single-frame target orientation of the virtual camera in the fifth picture frame.
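A minimal sketch of this non-character-locked behavior follows, assuming the virtual follower object chases the self character with a fixed per-frame interpolation coefficient and the camera yaw is kept unless a visual field adjustment delta is received; `follow_coef` is an assumed tuning value.

```python
def update_follower_unlocked(follower_pos, self_pos, follow_coef=0.15):
    # move the follower object a fraction of the way toward the self character
    return tuple(f + (s - f) * follow_coef for f, s in zip(follower_pos, self_pos))

def update_camera_yaw(actual_yaw, adjustment=None):
    # no adjustment: keep the orientation from the first picture frame;
    # otherwise apply the visual field adjustment delta
    return actual_yaw if adjustment is None else actual_yaw + adjustment
```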
In some embodiments, the object position determining module 1720 is further configured to:
As such, a virtual follower object in a three-dimensional virtual environment is used as a visual focus of a virtual camera. In a character-locked state, position information of the virtual follower object is determined based on position information of a self character and position information of a locked character, and a position and orientation of the virtual camera are updated based on the position information of the virtual follower object. Because both the position information of the self character and that of the locked character are taken into account when determining the position information of the virtual follower object, the locked character is prevented from being blocked by the self character, and the determined position information of the virtual follower object is more reasonable and accurate. This ensures that, in a picture captured by the virtual camera taking the virtual follower object as the visual focus, the self character and the locked character are presented to a user in a clearer and more reasonable way, which improves the reasonability of the camera motion of the virtual camera in the character-locked state, and thus improves the display effect of a picture.
In addition, the distance between the target position of the virtual camera and the target position of the virtual follower object is kept shorter than the distance between the target position of the virtual camera and the target position of the first locked character, so that the virtual follower object is closer to the virtual camera than the first locked character within a visual field range of the virtual camera, thereby preventing the first locked character from blocking the visual field of the virtual camera and further improving the reasonability of the camera motion of the virtual camera in the character-locked state.
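One placement that satisfies this distance relation by construction is to set the camera's target position behind the virtual follower object, on the side facing away from the first locked character, as sketched below; the fixed camera distance is an assumption.

```python
import math

def camera_target_position(follower, locked, cam_distance=4.0):
    # step back from the follower along the locked -> follower direction, so
    # dist(camera, follower) = cam_distance < dist(camera, locked character)
    dx, dz = follower[0] - locked[0], follower[2] - locked[2]
    n = math.hypot(dx, dz) or 1.0
    return (follower[0] + dx / n * cam_distance,
            follower[1],
            follower[2] + dz / n * cam_distance)
```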
Another exemplary embodiment of the present disclosure further provides a picture display apparatus. As shown in FIG. 17, the apparatus includes a picture display module 1710.
The picture display module 1710 is configured to display a first picture frame, the first picture frame being a picture obtained by using a virtual camera to capture a three-dimensional virtual environment with a virtual follower object in the three-dimensional virtual environment as a visual focus.
The picture display module 1710 is further configured to display the second picture frame based on a single-frame target position and single-frame target orientation of the virtual camera in the second picture frame in response to movement of at least one of a self character and a first locked character, the single-frame target position and the single-frame target orientation being determined according to a target position and target orientation of the virtual camera, the target position and target orientation of the virtual camera being determined according to a target position of the virtual follower object, a distance between the target position of the virtual camera and the target position of the virtual follower object being shorter than a distance between the target position of the virtual camera and a target position of the first locked character, and the first locked character being a locked target corresponding to the self character in a character-locked state.
In some embodiments, as shown in FIG. 17, the apparatus further includes an object position determining module 1720, a camera position determining module 1730, and a single-frame position determining module 1740.
The object position determining module 1720 is configured to: determine a target position of the self character and the target position of the first locked character in response to the movement of at least one of the self character and the first locked character; and determine the target position of the virtual follower object according to the target position of the self character and the target position of the first locked character.
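The disclosure does not give a formula for this derivation. As a hedged sketch, the follower object's target position could, for example, be offset from the self character part of the way toward the first locked character so that both characters tend to fall within the camera's visual field; the weighting below is an assumption.

```python
def follower_target(self_pos, locked_pos, weight=0.3):
    # bias the visual focus from the self character toward the locked character
    return tuple(s + (t - s) * weight for s, t in zip(self_pos, locked_pos))
```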
The camera position determining module 1730 is configured to determine a target position and target orientation of the virtual camera according to the target position of the virtual follower object.
The single-frame position determining module 1740 is configured to perform interpolation according to the target position and target orientation of the virtual camera and an actual position and actual orientation of the virtual camera in the first picture frame, to obtain a single-frame target position and single-frame target orientation of the virtual camera in a second picture frame.
The picture display module 1710 is further configured to generate and display the second picture frame based on the single-frame target position and single-frame target orientation of the virtual camera in the second picture frame.
In some embodiments, the camera position determining module 1730 is further configured to control, in the character-locked state in response to a visual field adjustment operation performed on the self character, the virtual camera to rotate around the virtual follower object.
The picture display module 1710 is further configured to: determine, in the rotating process, a pre-locked character in the three-dimensional virtual environment, and display a third picture frame, the third picture frame displaying the pre-locked character and a pre-locked mark corresponding to the pre-locked character.
The picture display module 1710 is further configured to: determine the pre-locked character as a second locked character in response to a lock confirmation operation performed on the pre-locked character, and display a fourth picture frame, the fourth picture frame displaying the second locked character and a locked mark corresponding to the second locked character.
In some embodiments, the object position determining module 1720 is further configured to update, in a non-character-locked state, the position of the virtual follower object by taking the self character as a follow target, to obtain a single-frame target position of the virtual follower object in a fifth picture frame.
The picture display module 1710 is further configured to display the fifth picture frame based on the single-frame target position and single-frame target orientation of the virtual camera in the fifth picture frame, the single-frame target position of the virtual camera in the fifth picture frame being determined according to the single-frame target position of the virtual follower object in the fifth picture frame, and the single-frame target orientation of the virtual camera in the fifth picture frame being determined according to an actual orientation of the virtual camera in the first picture frame.
As such, a virtual follower object in a three-dimensional virtual environment is used as a visual focus of a virtual camera. In a character-locked state, position information of the virtual follower object is determined based on position information of a self character and position information of a locked character, and a position and orientation of the virtual camera are updated based on the position information of the virtual follower object. Because both the position information of the self character and that of the locked character are taken into account when determining the position information of the virtual follower object, the locked character is prevented from being blocked by the self character, and the determined position information of the virtual follower object is more reasonable and accurate. This ensures that, in a picture captured by the virtual camera taking the virtual follower object as the visual focus, the self character and the locked character are presented to a user in a clearer and more reasonable way, which improves the reasonability of the camera motion of the virtual camera in the character-locked state, and thus improves the display effect of a picture.
In addition, the distance between the target position of the virtual camera and the target position of the virtual follower object is kept shorter than the distance between the target position of the virtual camera and the target position of the first locked character, so that the virtual follower object is closer to the virtual camera than the first locked character within a visual field range of the virtual camera, thereby preventing the first locked character from blocking the visual field of the virtual camera and further improving the reasonability of the camera motion of the virtual camera in the character-locked state.
It is noted that, when the apparatus provided in the foregoing embodiments implements its functions, the division into the foregoing function modules is merely used as an example for description. In practical applications, the functions may be allocated to and completed by different function modules according to requirements; that is, an internal structure of the device may be divided into different function modules to complete all or some of the functions described above. In addition, the apparatus embodiments and the method embodiments provided above belong to the same conception; for specific implementation processes of the apparatus, refer to the method embodiments, and details are not described here again.
As shown in FIG. 18, the terminal device 1800 usually includes a processor 1801 and a memory 1802.
The processor 1801 may include one or more processing cores, for example, a 4-core processor or an 8-core processor. The processor 1801 may be implemented in at least one hardware form of a digital signal processor (DSP), a field-programmable gate array (FPGA), or a programmable logic array (PLA). The processor 1801 may also include a main processor and a coprocessor. The main processor is a processor configured to process data in an awake state, and is also referred to as a central processing unit (CPU). The coprocessor is a low-power-consumption processor configured to process data in a standby state. In some embodiments, the processor 1801 may be integrated with a graphics processing unit (GPU). The GPU is configured to render and draw content that needs to be displayed on a display screen. In some embodiments, the processor 1801 may further include an artificial intelligence (AI) processor. The AI processor is configured to process computing operations related to machine learning.
The memory 1802 may include one or more computer-readable storage media. The computer-readable storage medium may be non-transitory. The memory 1802 may further include a high-speed random access memory and a nonvolatile memory, for example, one or more disk storage devices or flash storage devices. In some embodiments, a non-transitory computer-readable storage medium in the memory 1802 is configured to store a computer program. The computer program is configured to be executed by one or more processors to implement the above picture display methods.
In some embodiments, the terminal device 1800 may further include a peripheral interface 1803 and at least one peripheral. The processor 1801, the memory 1802, and the peripheral interface 1803 may be connected through a bus or a signal cable. Each peripheral may be connected to the peripheral interface 1803 through a bus, a signal cable, or a circuit board. Specifically, the peripheral includes at least one of: a radio frequency circuit 1804, a display screen 1805, an audio circuit 1806, and a power supply 1807.
A person skilled in the art may understand that the structure shown in FIG. 18 does not constitute a limitation on the terminal device 1800, and the terminal device 1800 may include more or fewer components than those shown in the figure, combine some components, or adopt a different component arrangement.
In an exemplary embodiment, a computer-readable storage medium is further provided. The storage medium stores a computer program. The computer program, when executed by a processor, implements the above picture display methods.
In some embodiments, the computer-readable storage medium may include: a Read-Only Memory (ROM), a Random Access Memory (RAM), a Solid State Drive (SSD), an optical disk, or the like. The RAM may include a resistive random access memory (ReRAM) and a dynamic random access memory (DRAM).
In an exemplary embodiment, a computer program product or a computer program is further provided. The computer program product or the computer program includes computer instructions stored in a computer-readable storage medium. A processor of a terminal device reads the computer instructions from the computer-readable storage medium and executes the computer instructions, causing the terminal device to implement the above picture display method.
It is noted that information (including but not limited to object device information, object personal information, and any other suitable information), data (including but not limited to data for analysis, stored data, displayed data, and any other suitable data), and signals involved in the present disclosure are all authorized by an object or fully authorized by all parties, and the collection, use, and processing of the relevant data need to comply with the relevant laws, regulations, and standards of the relevant countries and regions. For example, the user account and the three-dimensional virtual environment involved in the present disclosure are both obtained under full authorization.
“A plurality of” mentioned herein means two or more. “And/or” describes an association relation for associated objects and represents that three relationships may exist. For example, A and/or B may represent: only A exists, both A and B exist, and only B exists. The character “/” usually indicates an “or” relation between associated objects. In addition, the step numbers described in the present disclosure merely exemplarily show a possible execution sequence of the steps. In some other embodiments, the steps may not be performed according to the number sequence. For example, two steps with different numbers may be performed simultaneously, or two steps with different numbers may be performed according to a sequence contrary to the sequence shown in the figure. This is not limited in the embodiments of the present disclosure.
As used herein, the term module (and other similar terms such as submodule, unit, subunit, etc.) in this disclosure may refer to a software module, a hardware module, or a combination thereof. A software module (e.g., computer program) may be developed using a computer programming language. A hardware module may be implemented using processing circuitry and/or memory. Each module can be implemented using one or more processors (or processors and memory). Likewise, a processor (or processors and memory) can be used to implement one or more modules. Moreover, each module can be part of an overall module that includes the functionalities of the module.
The foregoing descriptions are merely exemplary embodiments of the present disclosure, but are not intended to limit the claimed invention. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present disclosure shall fall within the protection scope of the present disclosure.
Foreign Application Priority Data: Application No. 202210003178.6, filed Jan. 2022, China (national).
This application is a continuation of PCT Patent Application No. PCT/CN2022/127196, filed on Oct. 25, 2022, which claims priority to Chinese Patent Application No. 202210003178.6, filed on Jan. 4, 2022, both of which are incorporated herein by reference in their entirety.
Related U.S. Application Data: Parent application PCT/CN2022/127196, filed Oct. 2022 (WO); child application No. 18/340,676 (US).