This application relates to the field of internet technologies, and in particular, to a data processing method and apparatus in a virtual scene, an electronic device, a computer-readable storage medium, and a computer program product.
In a related shooting game, to balance computing performance of an electronic device against a presentation effect in a game scene, a three-dimensional model globally presented to a player is generally preset according to particular rules. The presented three-dimensional model and the content of an information card are determined based on data available when the player enters the game. As a result, flexibility of an editing operation for the three-dimensional model is poor, and utilization of display resources and processing resources of the device is low. Because interaction information for the player in the game is presented mostly as static text, there is a strong sense of fragmentation between the interaction information and the character model corresponding to the player, so that the overall presentation effect is poor.
The embodiments of this application provide a data processing method and apparatus in a virtual scene, an electronic device, a computer-readable storage medium, and a computer program product, which can improve flexibility of an editing operation, ensure integrity of a three-dimensional model during presentation, and improve a presentation effect of the three-dimensional model.
An embodiment of this application provides a method for processing data in a virtual scene, the method including: displaying, in response to an editing instruction for a three-dimensional model triggered by a target object, an editing interface configured for editing the three-dimensional model of a virtual object; determining an edited target three-dimensional model of the virtual object based on the editing interface; and presenting the target three-dimensional model of the virtual object at a presentation position of the three-dimensional model in the virtual scene.
An embodiment of this application provides an electronic device, including: a memory, configured to store computer-executable instructions; and a processor, configured to implement, when executing the computer-executable instructions stored in the memory, the method for processing data in a virtual scene provided in the embodiments of this application.
An embodiment of this application provides a non-transitory computer-readable storage medium, having computer-executable instructions stored therein. When the computer-executable instructions are executed by a processor of an electronic device, the electronic device is caused to perform the method for processing data in a virtual scene provided in the embodiments of this application.
The embodiments of this application include the following beneficial effects.
By applying the embodiments of this application, an editing interface is presented based on a received editing instruction for a three-dimensional model, and editing operations on a character model, a border model, and an information component of the three-dimensional model are completed based on the editing interface, to obtain a target three-dimensional model of a virtual object. Therefore, hardware processing resources of an electronic device can be fully utilized, on-demand editing of the three-dimensional model is implemented, and flexibility of an editing operation for the three-dimensional model is improved. The target three-dimensional model is presented at a presentation position of the three-dimensional model in a virtual scene, so that an edited three-dimensional model is presented. An appearance of the character model of the three-dimensional model is consistent with an appearance of the virtual object, the three-dimensional model carries the border model surrounding the character model, at least one information component is configured on the border model, and the information component carries object information of the virtual object. Therefore, display resources of the electronic device are fully utilized, and a presentation effect of the three-dimensional model is improved.
To make the objectives, technical solutions, and advantages of this application clearer, the following describes this application in detail with reference to the accompanying drawings. The described embodiments are not to be construed as a limitation on this application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of this application without creative efforts shall fall within the protection scope of this application.
“Some embodiments” involved in the following description describe a subset of all possible embodiments. However, “some embodiments” may be the same subset or different subsets of all the possible embodiments, and may be combined with each other when there is no conflict.
In the following description, the terms “first”, “second”, and “third” are merely intended to distinguish between similar objects and do not indicate a specific sequence of the objects. A specific order or sequence of the “first”, “second”, and “third” may be interchanged if permitted, so that the embodiments of this application described herein may be implemented in a sequence other than the sequence illustrated or described herein.
Unless otherwise defined, meanings of all technical and scientific terms used in this specification are the same as those usually understood by a person skilled in the art to which this application belongs. Terms used in this specification are merely intended to describe objectives of the embodiments of this application, but are not intended to limit this application.
Before the embodiments of this application are described in detail, the nouns and terms involved in the embodiments of this application are described, and these nouns and terms are applicable to the following explanations.
(1) A client is an application that runs in a terminal and that is configured to provide various services, such as an instant communication client and a video playing client.
(2) “In response to” is used for representing a condition or a status on which an executed operation depends. When the dependent condition or status is met, the one or more executed operations may be performed in real time or with a set delay. Unless otherwise specified, there is no limitation on a sequence in which the operations are performed.
(3) The virtual scene is a virtual scene displayed when the application runs on the terminal. The virtual scene may be a simulated environment of the real world, a semi-simulated and semi-fictional virtual environment, or a purely fictional virtual environment. The virtual scene may be any one of a two-dimensional virtual scene, a 2.5-dimensional virtual scene, or a three-dimensional virtual scene. Dimensions of the virtual scene are not limited in the embodiments of this application. For example, the virtual scene may include the sky, the land, and the ocean. The land may include environmental elements such as deserts and cities. A user may control a virtual object to perform an action in the virtual scene, and the action includes but is not limited to: any one of adjusting a body posture, crawling, walking, running, riding, jumping, driving, picking up, shooting, attacking, or throwing. The virtual scene may be a virtual scene displayed from a first-person perspective (for example, the player plays the game from the perspective of the virtual object), may be a virtual scene displayed from a third-person perspective (for example, the player plays the game from a perspective chasing the virtual object), or may be a virtual scene displayed from an aerial view. The perspectives may be switched in any manner.
That the virtual scene is displayed from the first-person perspective is used as an example. The virtual scene displayed in a human-computer interaction interface may be determined as follows: based on a viewing position and a field of view angle of the virtual object in a full virtual scene, a field of view of the virtual object is determined, and a part of the full virtual scene located in the field of view is displayed; that is, the displayed virtual scene may be a part of the panoramic virtual scene. Because the first-person perspective is the viewing perspective with the greatest impact on the user, immersive perception for the user during operation can be implemented. That the virtual scene is displayed from the aerial view is used as an example. In response to a zooming operation for the panoramic virtual scene, a part of the virtual scene corresponding to the zooming operation is displayed in the human-computer interaction interface; that is, the displayed virtual scene may be a part of the panoramic virtual scene. This can improve operability of the user during the operation and improve human-computer interaction efficiency.
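For illustration only, the following TypeScript sketch shows the first-person field-of-view test described above in a simplified two-dimensional form. All identifiers (Vec2, isInFieldOfView) are hypothetical and not part of the embodiments; a real client would perform this test in three dimensions within its rendering pipeline.

```ts
// Minimal 2D sketch: a scene element is displayed when it lies within the
// field-of-view cone determined by the viewing position and view angle.
interface Vec2 { x: number; y: number; }

function isInFieldOfView(
  viewer: Vec2,      // viewing position of the virtual object
  facingRad: number, // direction the virtual object faces, in radians
  fovRad: number,    // full field-of-view angle, in radians
  target: Vec2,      // position of a candidate scene element
): boolean {
  const angleToTarget = Math.atan2(target.y - viewer.y, target.x - viewer.x);
  // Normalize the angular difference into (-PI, PI].
  let delta = angleToTarget - facingRad;
  while (delta <= -Math.PI) delta += 2 * Math.PI;
  while (delta > Math.PI) delta -= 2 * Math.PI;
  return Math.abs(delta) <= fovRad / 2;
}
```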
(4) Virtual objects are images of various people and objects that can be interacted with in the virtual scene, or movable objects in the virtual scene. The movable object may be a virtual character, a virtual animal, an animation character, or the like, for example, a person, an animal, a plant, an oil drum, a wall, or a stone displayed in the virtual scene. The virtual object may be a virtual avatar representing the user in the virtual scene. The virtual scene may include a plurality of virtual objects. Each virtual object has its own shape and volume in the virtual scene, and occupies a part of the space in the virtual scene.
During actual application, the virtual object may be a user role controlled through operations on the client, may be an artificial intelligence (AI) configured through training to fight in the virtual scene, or may be a non-player character (NPC) configured to interact in the virtual scene. For example, the virtual object may be a virtual character that interacts adversarially in the virtual scene. For example, a quantity of virtual objects participating in the interaction in the virtual scene may be preset, or may be dynamically determined based on a quantity of clients participating in the interaction.
A shooting game is used as an example. The user may control the virtual object to fall freely, glide, open a parachute to fall, or the like in the sky of the virtual scene; to run, jump, crawl, bend and move forward, or the like on the land; and may also control the virtual object to swim, float, dive, or the like in the ocean. Certainly, the user may also control the virtual object to ride a vehicle-type virtual item to move in the virtual scene; for example, the vehicle-type virtual item may be a virtual car, a virtual aircraft, or a virtual yacht. The user may also control the virtual object to interact adversarially with another virtual object by using an attack-type virtual item; for example, the virtual item may be a virtual mecha, a virtual tank, or a virtual fighter. The foregoing scenes are merely used as examples for description, and this is not limited in the embodiments of this application.
(5) Scene data represents various features of objects in the virtual scene during interaction, and for example, may include a position of the object in the virtual scene. Certainly, different types of features may be included depending on the type of the virtual scene. For example, in a game virtual scene, the scene data may include waiting time of various functions configured in the virtual scene (depending on a quantity of times that a same function can be used within a specific time), and may also represent attribute values of various statuses of a game character, for example, including a health point (also referred to as a “red point”), a magic point (also referred to as a “blue point”), and a status point.
(6) A three-dimensional model, also referred to as a 3D model, is a 3D virtual character model that is displayed to the player in the game and that is fully consistent with a character appearance of the player character. The model is not limited to being stationary, and may perform various actions based on settings of the player.
(7) A three-dimensional user interface (3D UI) means that content (including but not limited to text, numbers, images, and the like) originally displayed on a two-dimensional (2D) user interface (UI) is combined into a 3D model for display by technical means.
Based on explanations of the terms involved in the embodiments of this application, the following describes a data processing system in a virtual scene provided in the embodiments of this application. Referring to
The terminal (for example, the terminal 400-1 or the terminal 400-2) is configured to: receive, based on a view interface, a triggering operation of entering the virtual scene, and send an obtaining request of scene data of the virtual scene to the server 200.
The server 200 is configured to: receive the obtaining request of the scene data, and return the scene data of the virtual scene to the terminal in response to the obtaining request.
The terminal (for example, the terminal 400-1 or the terminal 400-2) is configured to: receive the scene data of the virtual scene, render an image of the virtual scene based on the obtained scene data, and display the image of the virtual scene for a target (virtual) object on a graphical interface (for example, a graphical interface 410-1 and a graphical interface 410-2 are shown). In the virtual scene, the terminal receives an editing instruction triggered by a target object (namely, a target user) for a three-dimensional model (of the target object); displays an editing interface configured for editing the three-dimensional model of the virtual object, the virtual object corresponding to the target object, the three-dimensional model including a character model consistent with an appearance of the virtual object and a border model surrounding the character model, at least one information component being configured on the border model, and the information component carrying object information of the virtual object; determines an edited target three-dimensional model of the virtual object based on the editing interface; and presents the target three-dimensional model of the virtual object at a presentation position of the three-dimensional model in the virtual scene. All content displayed in the image of the virtual scene is obtained by rendering the returned scene data of the virtual scene.
During actual application, the server 200 may be an independent physical server, a server cluster or a distributed system composed of a plurality of physical servers, or a cloud server that provides basic cloud computing services such as a cloud service, a cloud database, cloud computing, a cloud function, cloud storage, a network service, cloud communication, a middleware service, a domain name service, a security service, a content delivery network (CDN), big data, and an artificial intelligence platform. The terminal (for example, the terminal 400-1 or the terminal 400-2) may be a smartphone, a tablet computer, a laptop computer, a desktop computer, a smart speaker, a smart television, a smart watch, or the like, but is not limited thereto. The terminal (for example, the terminal 400-1 or the terminal 400-2) and the server 200 may be connected directly or indirectly through wired or wireless communication. This is not limited in this application.
In an actual implementation, an application supporting the virtual scene is installed and run on the terminal (including the terminal 400-1 and the terminal 400-2). The application may be any one of a first-person shooting (FPS) game, a third-person shooting game, a driving game with steering as the main action, a multiplayer online battle arena (MOBA) game, a two-dimensional (2D) game application, a three-dimensional (3D) game application, a virtual reality application, a three-dimensional map program, a simulation program, or a multiplayer gunfight survival game. The application may alternatively be a console application, such as a console 3D game program.
An electronic game scene is used as an example scenario. A user may operate on the terminal in advance, and the terminal may download a game configuration file of an electronic game after detecting the operation of the user. The game configuration file may include an application, interface display data, virtual scene data, or the like of the electronic game, so that the user can invoke the game configuration file when logging in to the electronic game on the terminal, to render and display an interface of the electronic game. The user may perform a touch operation on the terminal. After detecting the touch operation, the terminal may determine game data corresponding to the touch operation, and render and display the game scene. The game data may include the virtual scene data, action data of the virtual object in the virtual scene, and the like.
During actual application, the terminal (including the terminal 400-1 and the terminal 400-2) receives, based on a view interface, a triggering operation of entering the virtual scene, and sends an obtaining request of scene data of the virtual scene to the server 200. The server 200 receives the obtaining request of the scene data, and returns the scene data of the virtual scene to the terminal in response to the obtaining request. The terminal receives the scene data of the virtual scene, renders an image of the virtual scene based on the scene data, to display the virtual object in an interface of the virtual scene, and when a presentation condition of the three-dimensional model of the virtual object is satisfied, presents the target three-dimensional model (the edited three-dimensional model of the virtual object based on the editing interface) of the virtual object at the presentation position in the virtual scene.
The embodiments of this application may alternatively be implemented by using a cloud technology. The cloud technology is a hosting technology that unifies a series of resources such as hardware, software, and a network in a wide area network or a local area network, to implement data computing, storage, processing, and sharing.
The cloud technology is a general term for a network technology, an information technology, an integration technology, a management platform technology, an application technology, and the like that are applied based on a cloud computing business model. The cloud technology can form a resource pool to be used on demand. The cloud computing technology is an important support. A background service of a technical network system requires a large quantity of computing and storage resources.
Referring to
The processor 510 may be an integrated circuit chip having a signal processing capability, for example, a general-purpose processor, a digital signal processor (DSP), another programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component. The general-purpose processor may be a microprocessor, any conventional processor, or the like.
The user interface 530 includes one or more output apparatuses 531 that can display media content, including a speaker and/or a visual display screen. The user interface 530 also includes one or more input apparatuses 532, including user interface components that facilitate user input, such as a keyboard, a mouse, a microphone, a touchscreen display, a camera, or another input button or control.
The memory 550 may be a removable memory, a non-removable memory, or a combination thereof. Exemplary hardware devices include a solid-state memory, a hard disk drive, an optical disc drive, and the like. The memory 550 may include one or more storage devices physically remote from the processor 510.
The memory 550 may be a volatile memory or a non-volatile memory, or may include both a volatile memory and a non-volatile memory. The non-volatile memory may be a read only memory (ROM), or the volatile memory may be a random access memory (RAM). The memory 550 described in the embodiments of this application is intended to include, but is not limited to, memories of any suitable type.
In some embodiments, the memory 550 can store data to support various operations. Examples of the data include a program, a module, and a data structure, or a subset or a superset thereof. The following provides descriptions by using examples.
An operating system 551 includes a system program configured to process various basic system services and perform hardware-related tasks, for example, a framework layer, a core library layer, and a driver layer, and is configured to implement various basic services and process hardware-based tasks.
A network communication module 552 is configured to reach another computing device through one or more (wired or wireless) network interfaces 520. Exemplary network interfaces 520 include Bluetooth, wireless fidelity (Wi-Fi), a universal serial bus (USB), and the like.
A display module 553 is configured to present information (such as a user interface configured for operating a peripheral device and displaying content and information) through one or more output apparatuses 531 (such as a display screen and a speaker) associated with a user interface 530.
An input processing module 554 is configured to: detect input or interaction of the user from one or more input apparatuses 532, and translate the detected input or interaction.
In some embodiments, a data processing apparatus in the virtual scene provided in the embodiments of this application may be implemented by using software.
In some other embodiments, the data processing apparatus in the virtual scene provided in the embodiments of this application may be implemented in a combination of software and hardware. For example, the data processing apparatus in the virtual scene provided in the embodiments of this application may be implemented by using a processor in a form of a hardware decoding processor, where the hardware decoding processor is programmed to perform the data processing method in the virtual scene provided in the embodiments of this application. For example, the hardware decoding processor may use one or more application-specific integrated circuits (ASICs), DSPs, programmable logic devices (PLDs), complex programmable logic devices (CPLDs), field-programmable gate arrays (FPGAs), or other electronic elements.
Based on the foregoing explanation of the data processing system and the electronic device in the virtual scene provided in the embodiments of this application, the following describes the data processing method in the virtual scene provided in the embodiments of this application. In some embodiments, the data processing method in the virtual scene provided in the embodiments of this application may be implemented by the server or the terminal separately, or may be implemented by the server and the terminal collaboratively. In some embodiments, the terminal or the server may implement the data processing method in the virtual scene provided in the embodiments of this application by running a computer program. For example, the computer program may be an original program or a software module in the operating system; may be a native application (APP), to be specific, a program that needs to be installed in the operating system to run, such as a client supporting the virtual scene, for example, a game APP; or may be a mini program, to be specific, a program that only needs to be downloaded to a browser environment to run; or may alternatively be a mini program that can be embedded into any APP. In conclusion, the computer program may be any form of application, module, or plug-in.
During actual application, the data processing method in the virtual scene provided in the embodiments of this application may be implemented by the server or the terminal separately, or may be implemented by the server and the terminal collaboratively. That the terminal performs implementation separately is used as an example to describe the data processing method in the virtual scene provided in the embodiments of this application. Referring to
Operation 101: A terminal displays, in response to an editing instruction for a three-dimensional model triggered by a target object, an editing interface configured for editing the three-dimensional model of a virtual object.
The three-dimensional model may be a three-dimensional image model. The three-dimensional model includes at least one of a character model consistent with an appearance of the virtual object and a border model surrounding the character model, in other words, includes the character model consistent with the appearance of the virtual object and/or the border model surrounding the character model. At least one information component is configured on the border model, and the information component carries object information of the virtual object.
The virtual object corresponds to the target object; to be specific, an appearance of the three-dimensional model is consistent with the appearance of the virtual object, in other words, the three-dimensional model has the same appearance as the virtual object. When the virtual scene is a game scene, correspondingly, the virtual object is a game character of a player (user) in the game scene. The player may edit, based on an editing interface, the three-dimensional model having the same appearance as the game character of the player, such as a virtual sculpture or a virtual statue having the same appearance as the game character of the player. The border model may be understood as a border configured to carry or load an edited three-dimensional model, and may also be referred to as a bounding box. The object information is information of the virtual object in the virtual scene, such as a name, a game mastery value, and a game performance value.
During actual implementation, an application client supporting the virtual scene is deployed on the terminal, such as a virtual scene-specific application client or an application having a function of the virtual scene. When the virtual scene is the game scene, the virtual scene-specific application client is a game client, and the application having the function of the virtual scene may be an instant communication client, an education client, a video playing client, or the like. The terminal runs the application client based on an activation operation for the application client, displays an activation interface (“start game”) of the virtual scene (such as a shooting game scene), and presents the three-dimensional model corresponding to the virtual object controlled by the player on the activation interface. Alternatively, in an interface of the virtual scene (that is, during game play), the terminal may present the three-dimensional model of the virtual object at a preset presentation position in the virtual scene based on an actual requirement.
During actual application, the virtual object may be a virtual image in the virtual scene corresponding to a user account currently logged in to on the application client. For example, the virtual object may be the virtual object controlled by the user when entering the shooting game. Certainly, other virtual objects or interaction objects may also be included in the virtual scene, and may be controlled by other users or robot programs. The player is represented by the virtual object in the virtual scene. In addition, when a presentation condition for the three-dimensional model is satisfied, the three-dimensional model corresponding to the player may also be presented. The three-dimensional model may be the virtual sculpture or the virtual statue (to be specific, a three-dimensional image model in the virtual scene, not a two-dimensional image presented in the interface of the virtual scene) of the player in the virtual scene. An appearance of the character model in the virtual sculpture (statue) is consistent with an appearance of the virtual object controlled by the player in the virtual scene. The virtual sculpture (statue) may also include the border model, to be specific, a border configured to carry the virtual sculpture (statue). The border model also carries the information component configured to display the object information of the virtual object. The information component may be configured to present object information of a target form of the virtual object. The target form may include at least one of a three-dimensional text form or a three-dimensional image form. Both the border model and the information component are models having three-dimensional structures in the virtual scene, not two-dimensional images.
For example, referring to
A triggering manner of an editing instruction for the three-dimensional model is described. In some embodiments, a terminal may receive the editing instruction for the three-dimensional model in the following manner: The terminal displays an editing control for the three-dimensional model in an interface of the virtual object in a virtual scene. The terminal receives the editing instruction in response to a triggering operation for the editing control, and displays an editing interface configured for editing the three-dimensional model in the interface of the virtual scene.
During actual implementation, an application client of the virtual scene may further provide a player with a function for editing the three-dimensional model. When receiving the editing instruction of the player for the three-dimensional model corresponding to the player, the terminal displays, in response to the current editing instruction, the editing interface configured for editing the three-dimensional model. In the editing interface, the three-dimensional model may be custom-set.
In some embodiments, the terminal may receive the editing instruction for the three-dimensional model in the following manner: The terminal displays the interface of the virtual scene; when an editing condition of the three-dimensional model is satisfied, displays editing prompt information in the interface of the virtual scene, the editing prompt information being configured to prompt that a target object has a permission to edit the three-dimensional model; and receives the editing instruction triggered for the three-dimensional model based on the editing prompt information. In other words, there is an editing condition corresponding to editing of the three-dimensional model, and not all objects can edit the three-dimensional model. The editing condition is set to improve enthusiasm of a player object, enhance human-computer interaction, and improve utilization of hardware resources of an electronic device. The target object is prompted so that the target object learns in a timely manner that the target object has the permission to edit the three-dimensional model. Therefore, the three-dimensional model is edited, and utilization of display resources of the electronic device is improved.
During actual implementation, the terminal displays the interface of the virtual scene (for example, in a shooting game, displays the interface of the virtual scene during a game). The virtual object performs interaction operations with other objects in the virtual scene and has corresponding interaction information. A shooting game including a first half and a second half is used as an example. During the game, the interaction information (such as a health point, a magic point, a status point, or a kill count) of the virtual object changes constantly. The editing condition for the three-dimensional model may be set based on the interaction information of the virtual object. When the editing condition is satisfied, the editing prompt information configured to prompt that the target object has the permission to edit the three-dimensional model is directly displayed in the interface of the virtual scene. The editing prompt information may be displayed in a form of a floating layer (a pop-up window); to be specific, when the editing condition is satisfied, a floating layer including the editing prompt information is displayed in the interface of the virtual scene, and the floating layer may also include a confirm function option and a cancel function option. The terminal receives the editing instruction for the three-dimensional model in response to a triggering operation for the confirm function option.
For example, referring to
An editing condition for the three-dimensional model is described. In some embodiments, the terminal may determine that the editing condition for the three-dimensional model is satisfied in the following manner: When at least one of the following is satisfied, the terminal determines that the editing condition for the three-dimensional model is satisfied: interaction performance of a virtual object in a virtual scene is obtained, and the interaction performance reaches a performance threshold; or a virtual resource of the virtual object in the virtual scene is obtained, and a virtual resource size reaches a resource size threshold.
During actual implementation, the editing condition for a player to edit the three-dimensional model of the player may be that the interaction performance of the virtual object in the virtual scene reaches the performance threshold, or that the virtual resource size of the virtual object in the virtual scene reaches the resource size threshold. The virtual resource may be a resource such as a virtual item, a virtual substance, or a virtual vehicle purchased by the player. When a total virtual value of the virtual resources of the player reaches a value threshold, it may represent that the editing condition configured for editing the three-dimensional model of the player is satisfied.
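As a minimal sketch of this editing condition, the following TypeScript fragment checks the two thresholds described above. The field names and threshold values are illustrative assumptions, not values defined by the embodiments.

```ts
// Hedged sketch: the editing condition is met when either threshold is reached.
interface ObjectStats {
  interactionPerformance: number; // e.g., a score accumulated during the match
  virtualResourceSize: number;    // total value of the player's virtual resources
}

const PERFORMANCE_THRESHOLD = 1000;  // illustrative value
const RESOURCE_SIZE_THRESHOLD = 500; // illustrative value

function isEditingConditionSatisfied(stats: ObjectStats): boolean {
  return (
    stats.interactionPerformance >= PERFORMANCE_THRESHOLD ||
    stats.virtualResourceSize >= RESOURCE_SIZE_THRESHOLD
  );
}
```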
Operation 102: Construct a target three-dimensional model of the virtual object on an editing interface.
The terminal constructs the target three-dimensional model of the virtual object on the editing interface, in other words, the terminal determines, based on the editing interface, the target three-dimensional model of the virtual object edited by a target object.
During actual implementation, based on the editing interface presented by the terminal for editing the three-dimensional model, the player can perform editing operations on the three-dimensional model of the virtual object in the editing interface, the editing operations including at least an editing operation on a character model of the three-dimensional model, an editing operation on a border model of the three-dimensional model, and an editing operation on each information component of the three-dimensional model. After the editing operations are performed, the target three-dimensional model of the virtual object may be obtained. In some embodiments, a performing sequence of the editing operation on the character model, the editing operation on the border model, and the editing operation on the information component may be set freely based on a preference or habit of the user.
In some embodiments, the terminal may implement the editing operation on the character model of the three-dimensional model in the following manner: The terminal receives a character editing instruction for the character model based on the editing interface, the character editing instruction being configured to instruct to edit character content of the character model, and the character content including at least one of the following: a material, a posture, or an item; in response to the character editing instruction, displays at least one piece of candidate character content corresponding to the character content; and in response to a selection instruction for the candidate character content, determines selected candidate character content as target character content of the character model, to obtain a target three-dimensional model having the target character content.
During actual implementation, the terminal displays the editing interface for editing the three-dimensional model in response to the editing instruction for the three-dimensional model. Because the editing interface is configured for editing each part (such as the character model, the border model, or the information component) of the three-dimensional model, an editing control corresponding to each part of the three-dimensional model may be displayed in the editing interface, including an editing control corresponding to the character model, an editing control corresponding to the border model, and an editing control corresponding to the information component. The terminal receives an editing instruction for a related part of the three-dimensional model triggered by a triggering operation of the player on the corresponding editing control. If the player triggers the editing control corresponding to the character model, the terminal receives the character editing instruction for the character model of the three-dimensional model, the character editing instruction being configured to instruct the player to edit the character content of the character model. The character content that may be edited for the character model includes the material, the posture, and the item. The material is a character material (such as a gold material, a silver material, or a diamond material) of the character model. The posture is a target action performed by the character model when the three-dimensional model is displayed (also referred to as entered). The item is a virtual item (such as a hand-held shooting item or a hand-held throwing item) carried by the character model when the three-dimensional model is displayed. If the player triggers the editing control corresponding to the border model, the terminal receives a border editing instruction for the border model of the three-dimensional model. The editing operation on the border model may include modifying a border shape, a border material, or the like of the border model. If the player triggers the editing control corresponding to the information component, the terminal receives the editing instruction for the information component. The terminal displays, in response to the editing instruction for each part of the three-dimensional model, at least one piece of candidate content corresponding to the part: displaying, in response to the character editing instruction, at least one piece of candidate character content corresponding to the character content; displaying, in response to the border editing instruction, at least one candidate border corresponding to the border model; or displaying, in response to the editing instruction for the information component, at least one piece of candidate object information related to the virtual object. Finally, the terminal controls, in response to the selection operation for the candidate content, the three-dimensional model to have the corresponding target content.
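The dispatch from editing controls to the three editable parts can be pictured with the following TypeScript sketch. The type names, the dispatch function, and the stubbed candidate lists are all hypothetical; they only mirror the control-to-part mapping described above.

```ts
// Illustrative dispatch: each editing control maps to one part of the model.
type EditTarget = "character" | "border" | "infoComponent";

interface EditingInstruction {
  target: EditTarget;
  // For the character model, the content dimension being edited.
  characterContent?: "material" | "posture" | "item";
}

function onEditingControlTriggered(instruction: EditingInstruction): string[] {
  switch (instruction.target) {
    case "character":
      // Show candidate character content for the chosen dimension.
      return listCandidateCharacterContent(instruction.characterContent ?? "material");
    case "border":
      return listCandidateBorderModels();
    case "infoComponent":
      return listCandidateObjectInformation();
  }
}

// Stubs standing in for client data sources (assumed, not from the source text).
function listCandidateCharacterContent(dim: string): string[] {
  return dim === "material" ? ["gold", "silver", "diamond"] : ["pose-1", "pose-2"];
}
function listCandidateBorderModels(): string[] { return ["border-1", "border-2"]; }
function listCandidateObjectInformation(): string[] { return ["name", "killCount"]; }
```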
For example, referring to
A triggering manner for a character editing instruction is described. In some embodiments, a terminal may receive the character editing instruction in the following manner: The terminal displays a content editing control of the character model in the editing interface, the content editing control including a material control configured for editing a material of the character model, a posture control configured for editing a posture of the character model, and an item control configured for editing an item of the character model; and receives the character editing instruction for the character model in response to a triggering operation for the content editing control. In other words, the editing control may be set for a material, a posture, or an item of the character model respectively, so that a player may independently edit each dimension of the character model based on the editing control. Therefore, editing of each dimension of the character model is decoupled, and editing efficiency for each dimension is improved.
During actual implementation, because an editing operation for the character model in the three-dimensional model may include an editing operation of “character content-material”, an editing operation of “character content-posture”, and an editing operation of “character content-item”, the player may trigger a corresponding content editing control for different character content, and trigger a character editing instruction corresponding to the corresponding content. The content editing control may include the material control configured for editing the material of the character model, the posture control configured for editing the posture of the character model, and the item control configured for editing the item of the character model. The terminal may receive, in response to a triggering operation for at least one content editing control of the player, the character editing instruction for editing the corresponding character content. The terminal may receive a character editing instruction triggered for the material control, a character editing instruction triggered for the posture control, and a character editing instruction triggered for the item control.
Continuing with the foregoing example, referring to
In some embodiments, when a preview area of the character model is included in an editing interface, a terminal may receive the character editing instruction for the character model of a three-dimensional model in the following manner: The terminal displays a preview image of the character model in the preview area of the character model; and receives the character editing instruction in response to a triggering operation for a target part in the preview image, the character editing instruction being configured to instruct to edit character content corresponding to the target part, where different parts in the preview image correspond to different character content.
During actual implementation, the editing interface configured for editing the three-dimensional model of a virtual object may include the preview area for previewing the character model, and the preview image of the three-dimensional model is displayed in the preview area. During actual application, the preview area presents the preview image in a shape consistent with an appearance of the three-dimensional model, and different parts of the preview image correspond to different parts of the three-dimensional model. When the editing interface is opened, a current three-dimensional model of the virtual object (an unedited three-dimensional model) may be displayed in the preview area. The three-dimensional model may be previewed in a form of a two-dimensional image in the preview area, or the three-dimensional model may be previewed directly. An edited three-dimensional model displayed in the preview area is consistent with a target three-dimensional model presented at a presentation position in the virtual scene. In this case, display resources of an electronic device are fully utilized, so that the player may preview the edited three-dimensional model through the preview area in real time during an editing process of the three-dimensional model, to adjust edited content based on the previewed three-dimensional model.
For example, referring to
In some embodiments, the terminal may display at least one piece of candidate character content corresponding to the character content in the following manner: when the character content includes a posture, displaying a plurality of candidate postures; and in response to a selection instruction for at least two of the candidate postures, determining the selected candidate postures as target postures of the character model. Correspondingly, the terminal presents, at a presentation position of the three-dimensional model in the virtual scene, a process in which the target three-dimensional model of the virtual object performs the target postures in sequence.
During actual implementation, if the character editing instruction received by the terminal is triggered based on a posture control, the plurality of candidate postures are displayed in the display interface, and at least two candidate postures may be selected from the plurality of candidate postures as the target postures of the character model. In other words, the character model may correspond to one posture or a plurality of postures. When the character model corresponds to one posture, when the edited three-dimensional model is displayed, that is, when the three-dimensional model is entered (shown for the first time), a process in which the character model of the edited three-dimensional model performs the posture is presented. When the character model corresponds to the plurality of postures, a selection sequence in which the plurality of postures are selected may be used as an execution sequence of the plurality of postures. When the edited three-dimensional model is displayed, that is, when the three-dimensional model is entered (shown for the first time), a process in which the character model of the edited three-dimensional model performs the plurality of postures in sequence is presented based on the foregoing execution sequence.
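The sequential playback of the selected postures can be sketched as follows in TypeScript. playPosture is a hypothetical client animation call assumed to resolve when one posture animation finishes; only the in-order iteration reflects the behavior described above.

```ts
// Sketch: postures are stored in selection order and performed one by one
// when the edited three-dimensional model is shown for the first time.
async function playSelectedPostures(
  selectedPostures: string[],                // in selection order
  playPosture: (p: string) => Promise<void>, // hypothetical animation call
): Promise<void> {
  for (const posture of selectedPostures) {
    await playPosture(posture); // wait for each posture before starting the next
  }
}
```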
For example, referring to
In some embodiments, when a preview area of the character model is included in an editing interface, a terminal may display a preview image of the character model in the following manner: In response to a selection instruction for candidate character content displayed in the editing interface, the terminal displays, in the preview area of the character model, a preview image of a character model having target character content.
During actual implementation, when the preview area is included in the editing interface, the terminal receives a character editing instruction corresponding to different character content, and displays at least one piece of candidate character content corresponding to the current character content. The player selects target character content from the at least one piece of candidate character content, and the character model having the target character content is displayed in the preview area. In this case, when a target object has a plurality of characters in the virtual scene, the plurality of characters are presented when the three-dimensional model is edited, for a user to autonomously select a to-be-edited character.
For example, referring to
In some embodiments, the terminal may edit a border model of a three-dimensional model in the following manner: The terminal receives a border editing instruction for the border model based on the editing interface, the border editing instruction being configured to instruct to edit the border model; displays at least one candidate border model in response to the border editing instruction; and determines a selected candidate border model as a target border model in response to a selection instruction for the candidate border model, to obtain a target three-dimensional model having the target border model.
During actual implementation, because the three-dimensional model further includes the border model, the border model may also be edited in the editing interface, and an editing process is as follows: The terminal receives the border editing instruction for the border model, displays at least one candidate border model in the editing interface, determines the target border model selected from the plurality of candidate border models, and replaces a current border model of the three-dimensional model with the selected target border model. When the preview area is included in the editing interface, the three-dimensional model having the target border model may also be previewed in the preview area.
For example, referring to
In some embodiments, the terminal may receive the border editing instruction for the border model in the following manner: The terminal displays, in the editing interface, a border editing control corresponding to the border model; and receives the border editing instruction for the border model in response to a triggering operation for the border editing control.
During actual implementation, the border editing control corresponding to the border model may be displayed in the editing interface, and the player triggers (for example, clicks) the border editing control. The terminal receives the border editing instruction for the border model, and performs the editing operation on the border model of the three-dimensional model based on the border editing instruction.
For example, referring to
In some embodiments, when a preview area of the border model is included in the editing interface, the terminal may preview a selected border model in the following manner: The terminal displays at least one candidate border model in response to a received border editing instruction; and displays a selected candidate border model in the preview area of the border model in response to a selection instruction for the candidate border model.
During actual implementation, in the editing interface displayed by the terminal, a preview area configured for previewing the border model may be included. The terminal may directly display at least one candidate three-dimensional border model for a current three-dimensional model in the editing interface in response to the received border editing instruction. The terminal receives a selection operation of a user for the candidate three-dimensional border model, and may display a selected candidate three-dimensional border model in the preview area of the border model.
For example, referring to
In some embodiments, after the terminal displays the selected candidate border model in the preview area of the border model, the terminal may display an information component of the three-dimensional model in the following manner: The terminal displays at least one addition bit of the information component on the candidate border model in the preview area of the border model; displays at least one piece of object information of a virtual object in response to a triggering operation for the addition bit; and displays an information component corresponding to the object information on the addition bit in response to a selection operation for the object information.
During actual implementation, the three-dimensional model of the virtual object may further include at least one information component, the information component being carried on the border model, the information component being configured for displaying the object information of the virtual object, and the information component also being three-dimensional. When the preview area is included in the editing interface, the preview area may further include at least one addition bit for previewing the information component, and each information component in the three-dimensional model corresponds to an addition bit in the preview area. Each addition bit in an idle state has an addition event, and each addition bit in an occupied state has a delete event. The addition event means that the addition bit may be clicked to receive an addition instruction triggered for the information component. The delete event means that, when the addition bit carries an information component (or an image of the information component), a delete instruction for the information component may be received. In addition, to allow information components to be added, a quantity of addition bits in the preview area is greater than or equal to a quantity of information components in the three-dimensional model. For example, if the three-dimensional model before editing includes three information components, in an editing process, a quantity of addition bits corresponding to the information components in the preview area is greater than or equal to three. In this case, the quantity of information components in an edited three-dimensional model may be increased.
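The idle/occupied behavior of an addition bit can be summarized in the following TypeScript sketch; the data shapes are assumptions used only to restate the addition event, the delete event, and the capacity rule above.

```ts
// Sketch of the addition-bit states: an idle bit accepts an addition event,
// and an occupied bit accepts a delete event.
interface AdditionBit {
  id: number;
  component: string | null; // null => idle state; otherwise occupied state
}

function onBitTriggered(bit: AdditionBit, pickInfo: () => string | null): void {
  if (bit.component === null) {
    // Addition event: let the player pick object information to attach.
    const chosen = pickInfo();
    if (chosen !== null) bit.component = chosen;
  } else {
    // Delete event: remove the information component from this bit.
    bit.component = null;
  }
}

// Capacity rule: the preview keeps at least as many addition bits as there
// are information components, so new components can still be added.
function hasCapacity(bits: AdditionBit[], componentCount: number): boolean {
  return bits.length >= componentCount;
}
```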
For example, referring to
In some embodiments, after the terminal displays a selected candidate border model in the preview area of the border model, the terminal may display the information component of the three-dimensional model in the following manner: The terminal displays an editing control configured for editing the information component in the editing interface; displays at least one piece of object information of a virtual object in response to a triggering operation for the editing control; and in response to a selection operation for the object information, determines an information component corresponding to the object information as the information component of the border model in a target three-dimensional model.
During actual implementation, the editing interface for the three-dimensional model further includes the editing control configured for editing the information component. The terminal receives a triggering operation of the player for the editing control, displays a plurality of types of object information of the virtual object, and determines selected target object information, to generate an information component corresponding to the target object information as an information component carried by the border model in the target three-dimensional model. The terminal may switch the object information represented in a form of a two-dimensional text or image into the information component represented in a form of a three-dimensional text or image, and carry the information component on the border model. A carrying manner may be that the information component is mounted on the border model, that the information component is attached to the border model, or the like. If an application client is applied to different text languages (Chinese, English, Korean, and the like), that is, players come from different countries, the object information in a form of a text in the information component may be switched into a target language corresponding to the player. For example, for a player A using Chinese, object information in Chinese is displayed in the information component, and for a player B using Korean, object information in Korean is displayed in the information component.
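A minimal sketch of the per-player language switching might look as follows in TypeScript; the label table and its entries are invented for illustration only.

```ts
// Sketch: textual object information in an information component is rendered
// in the target language corresponding to the player.
type Lang = "zh" | "en" | "ko";

const KILL_COUNT_LABEL: Record<Lang, string> = {
  zh: "击杀数",      // Chinese label, e.g., for player A
  en: "Kill count",
  ko: "킬 수",       // Korean label, e.g., for player B
};

function localizedKillCountLabel(lang: Lang): string {
  return KILL_COUNT_LABEL[lang];
}
```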
For example, referring to
During actual implementation, the terminal obtains an operation parameter of a pressing operation in response to the pressing operation for the addition bit in the preview area. The operation parameter includes at least one of press duration and a pressure value. For example, when the press duration reaches a duration threshold, or the pressure value reaches a pressure threshold, the addition bit is controlled to be in a levitation state. The terminal may adjust, in response to a moving operation for the addition bit in the levitation state, a position of the current addition bit relative to the border model in the preview area of the border model. The terminal controls, in response to a release operation for the moving operation, the addition bit to be switched from the levitation state to a fixed state. In this case, the current addition bit is fixed at a target position.
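The press/levitate/move/release flow can be sketched as below in TypeScript. The threshold values and state names are illustrative assumptions rather than parameters defined by the embodiments.

```ts
// Sketch of the long-press drag flow for an addition bit.
const DURATION_THRESHOLD_MS = 600; // illustrative duration threshold
const PRESSURE_THRESHOLD = 0.5;    // illustrative pressure threshold

type BitState = "fixed" | "levitating";

interface DraggableBit { state: BitState; x: number; y: number; }

function onPress(bit: DraggableBit, durationMs: number, pressure: number): void {
  // Either operation parameter reaching its threshold switches the bit
  // to the levitation state.
  if (durationMs >= DURATION_THRESHOLD_MS || pressure >= PRESSURE_THRESHOLD) {
    bit.state = "levitating";
  }
}

function onMove(bit: DraggableBit, x: number, y: number): void {
  if (bit.state === "levitating") { bit.x = x; bit.y = y; }
}

function onRelease(bit: DraggableBit): void {
  bit.state = "fixed"; // the bit is fixed at the released target position
}
```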
Through the foregoing editing interface, a finally determined three-dimensional model of the virtual object may present a three-dimensional character model, and may further present the three-dimensional border model and a three-dimensional information component, and the player is supported in performing a custom editing operation on the three-dimensional model. Therefore, a target three-dimensional model conforming to a demand of the player is finally obtained, the player's desire to show off by presenting the player's three-dimensional model in the virtual scene is satisfied, and human-computer interaction efficiency is effectively improved.
A presentation condition of the three-dimensional model is described. In some embodiments, the terminal may determine the presentation condition of the target three-dimensional model in the following manner: The terminal obtains presentation time of the target three-dimensional model, and when the presentation time arrives, determines that the presentation condition of the target three-dimensional model is satisfied; or the terminal displays a presentation control of the target three-dimensional model, and when the presentation control is triggered, determines that the presentation condition of the target three-dimensional model is satisfied.
The presentation time is described. In an actual implementation, at least one fixed presentation time of the three-dimensional model may be set in the virtual scene. For example, the presentation time includes 10 a.m. and 3 p.m. If the current time is 11 a.m., it is determined that the presentation time arrives when 3 p.m. is reached.
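Using the example above, the next fixed presentation time can be computed with the following TypeScript sketch; representing the times as hours on a 24-hour clock is a simplifying assumption.

```ts
// Sketch: with fixed presentation times of 10 a.m. and 3 p.m. and a current
// time of 11 a.m., the next presentation happens at 3 p.m.
function nextPresentationHour(fixedHours: number[], currentHour: number): number {
  const later = fixedHours.filter((h) => h > currentHour).sort((a, b) => a - b);
  // If no later slot remains today, wrap around to the earliest slot.
  return later.length > 0 ? later[0] : Math.min(...fixedHours);
}

// nextPresentationHour([10, 15], 11) === 15
```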
During actual implementation, based on the target three-dimensional model obtained in the editing interface for the three-dimensional model, the target three-dimensional model of the virtual object may be presented in the virtual scene when the corresponding presentation condition is satisfied. The presentation condition may be determining whether the presentation time of the three-dimensional model of the virtual object has arrived when the virtual scene starts (or game competition starts). When the presentation time arrives, the terminal determines that the presentation condition of the target three-dimensional model is satisfied. Alternatively, the presentation control may be displayed in the interface of the virtual scene; the terminal receives the triggering operation of the player for the presentation control, and determines that the presentation condition of the target three-dimensional model of the virtual object is satisfied.
Operation 103: Present the target three-dimensional model of the virtual object at the presentation position of the three-dimensional model in the virtual scene.
In an actual implementation, after determining an edited target three-dimensional model, the terminal may directly present the target three-dimensional model of the virtual object at the presentation position of the three-dimensional model in the virtual scene, or may present the target three-dimensional model at the presentation position of the three-dimensional model in the virtual scene when the presentation condition of the target three-dimensional model of the virtual object is satisfied.
During actual implementation, at least one presentation position for presenting the target three-dimensional model is preset in the virtual scene, and a display range (a circular area) corresponding to the presentation position is determined with the presentation position as an origin and a preset distance as a radius. When the virtual object enters the display range of the presentation position, the target three-dimensional model is presented at the origin (the presentation position) of the display range. When the virtual object has insufficient competition performance for its own target three-dimensional model to be presented in the virtual scene, and the virtual object is in the display range of the presentation position, the target three-dimensional model of another virtual object having higher competition performance in the current virtual scene may be presented instead.
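A minimal sketch of the circular display range described above follows; the vector type and function name are hypothetical:

```cpp
#include <cmath>

// The display range is centered on the presentation position (the origin) with a
// preset distance as the radius; the model is presented while the object is inside.
struct Vec2 { float x, y; };

bool IsInDisplayRange(const Vec2& presentationPos, const Vec2& objectPos, float radius) {
    const float dx = objectPos.x - presentationPos.x;
    const float dy = objectPos.y - presentationPos.y;
    return std::sqrt(dx * dx + dy * dy) <= radius;  // inside or on the circle
}
```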
For example, referring to
A manner of presenting the three-dimensional model in the virtual scene is described. In some embodiments, a terminal may present the target three-dimensional model of the virtual object in the virtual scene in the following manner: obtaining a position of the virtual object in the virtual scene when a quantity of presentation positions is at least two; selecting, based on the position of the virtual object in the virtual scene, the presentation position nearest to the virtual object from the at least two presentation positions as a target presentation position; and presenting the target three-dimensional model of the virtual object at the target presentation position.
During actual implementation, when the virtual object is in the display ranges corresponding to the at least two presentation positions, the presentation position closest to the virtual object is selected as the target presentation position at which the three-dimensional model is presented.
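The nearest-position selection can be sketched as follows (hypothetical names; squared distances are compared to avoid the square root):

```cpp
#include <cstddef>
#include <vector>

struct Vec2 { float x, y; };

// Squared distance suffices for choosing the minimum.
static float Dist2(const Vec2& a, const Vec2& b) {
    const float dx = a.x - b.x, dy = a.y - b.y;
    return dx * dx + dy * dy;
}

// Returns the index of the presentation position closest to the virtual object,
// which is used as the target presentation position (assumes a non-empty list).
std::size_t SelectTargetPresentationPosition(const std::vector<Vec2>& positions,
                                             const Vec2& objectPos) {
    std::size_t best = 0;
    for (std::size_t i = 1; i < positions.size(); ++i)
        if (Dist2(positions[i], objectPos) < Dist2(positions[best], objectPos))
            best = i;
    return best;
}
```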
For example, referring to
In some embodiments, a terminal may present the target three-dimensional model of a virtual object in a virtual scene in the following manner: when a quantity of presentation positions is at least two, and the at least two presentation positions correspond to two teams, determining, from the at least two presentation positions, the presentation positions corresponding to the team to which the virtual object belongs; when a quantity of presentation positions corresponding to the team to which the virtual object belongs is at least two, generating duplicates of the target three-dimensional model; and presenting the corresponding duplicates of the target three-dimensional model respectively at the presentation positions corresponding to the team to which the virtual object belongs.
During actual implementation, when there are a plurality of presentation positions for displaying a three-dimensional model in the virtual scene, the plurality of presentation positions may be grouped based on the team to which each interacting virtual object in the virtual scene belongs. To be specific, at least one presentation position is assigned to each team in the virtual scene, and the three-dimensional model of the virtual object may be presented at a presentation position corresponding to the team to which the virtual object belongs. When determining the presentation position corresponding to the three-dimensional model, the terminal obtains an idle state of each presentation position corresponding to the team to which the virtual object belongs. When there is a presentation position in the idle state (that is, no three-dimensional model is presented at the corresponding presentation position) among the presentation positions corresponding to the team, the terminal directly presents the three-dimensional model at the presentation position in the idle state. When all the presentation positions corresponding to the team are occupied, the terminal compares interaction performance of the other virtual objects of the same team, indicated by the three-dimensional models at the presentation positions, with interaction performance of the current virtual object, and uses the presentation position at which the three-dimensional model of the other virtual object having the lowest interaction performance is located as the presentation position of the target three-dimensional model of the current virtual object.
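The slot-selection policy just described (prefer an idle position, otherwise replace the lowest-performing occupant) might be sketched as follows; the types and the eviction condition are hypothetical:

```cpp
#include <cstddef>
#include <optional>
#include <vector>

struct TeamSlot {
    bool occupied = false;
    float occupantPerformance = 0.0f;  // interaction performance of the model's owner
};

std::optional<std::size_t> ChooseTeamSlot(const std::vector<TeamSlot>& slots,
                                          float currentPerformance) {
    // First pass: an idle presentation position is used directly.
    for (std::size_t i = 0; i < slots.size(); ++i)
        if (!slots[i].occupied) return i;

    // All occupied: find the slot whose occupant has the lowest performance.
    std::size_t worst = 0;
    for (std::size_t i = 1; i < slots.size(); ++i)
        if (slots[i].occupantPerformance < slots[worst].occupantPerformance)
            worst = i;

    // Replace only if the current virtual object performs better (assumed policy).
    if (currentPerformance > slots[worst].occupantPerformance) return worst;
    return std::nullopt;  // keep the existing models
}
```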
In some embodiments, the terminal may present the target three-dimensional model of the virtual object in the virtual scene in the following manner: The terminal obtains virtual weather corresponding to the presentation position of the three-dimensional model in the virtual scene; and when the virtual weather is target weather, displays the target three-dimensional model in a blurring state at the presentation position of the three-dimensional model in the virtual scene.
In some embodiments, the target weather is a non-sunny day having visibility lower than a visibility threshold, such as a snowy day or a foggy day. During actual implementation, the terminal may dynamically adjust presentation clarity of the virtual three-dimensional model based on the virtual weather at the presentation position for presenting the three-dimensional model in the virtual scene. When the virtual scene is sufficiently illuminated, a clear three-dimensional model is presented. When the virtual weather is the target weather (such as a cloudy day or a rainy day), clarity of the three-dimensional model is dynamically adjusted, and the three-dimensional model in the blurring state is displayed at the presentation position of the three-dimensional model. During actual application, a blurring degree of the three-dimensional model in the blurring state may be inversely correlated with visibility in the virtual scene. In this case, visibility of the three-dimensional model presented in the virtual scene varies with weather changes, and this improves realism of a presentation effect of the three-dimensional model.
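One possible mapping from visibility to blurring degree, consistent with the inverse correlation described above, is sketched below; the threshold and the linear mapping are assumptions:

```cpp
#include <algorithm>

constexpr float kVisibilityThreshold = 0.7f;  // below this, the weather counts as target weather

// Returns a blurring degree in [0, 1]; 0 means the model is rendered fully clear.
float BlurDegreeForVisibility(float visibility /* normalized to [0, 1] */) {
    if (visibility >= kVisibilityThreshold) return 0.0f;  // sufficiently clear weather
    const float t = visibility / kVisibilityThreshold;    // 0 (dense fog) .. 1 (threshold)
    return std::clamp(1.0f - t, 0.0f, 1.0f);              // lower visibility -> stronger blur
}
```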
In some embodiments, the terminal may present the target three-dimensional model of the virtual object in the virtual scene in the following manner: after presenting the target three-dimensional model of the virtual object at the presentation position, the terminal receives a target operation performed by the target virtual object for the target three-dimensional model; and displays an operation result of the target operation in response to the target operation.
During actual implementation, in a process in which the target three-dimensional model is presented at the corresponding presentation position, after another virtual object in the virtual scene performs a target operation on the target three-dimensional model, an operation result corresponding to the operation on the target three-dimensional model may be presented. For example, when an enemy virtual object performs a destroying operation on the target three-dimensional model, a destroyed target three-dimensional model is presented (the three-dimensional model is incomplete at this time).
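As a small illustration (hypothetical names and damage model), the destroying operation and its visible result might be tracked as follows:

```cpp
// The presented model keeps an integrity value; when it reaches zero, the
// destroyed (incomplete) model is what other players see.
struct PresentedModel {
    float integrity = 1.0f;  // 1.0 = intact, 0.0 = fully destroyed
    bool destroyed = false;
};

void ApplyDestroyingOperation(PresentedModel& model, float damage) {
    model.integrity = (model.integrity - damage < 0.0f) ? 0.0f : model.integrity - damage;
    if (model.integrity <= 0.0f) model.destroyed = true;  // present the destroyed model
}
```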
By applying the embodiments of this application, the target three-dimensional model of the virtual object can be finally obtained based on an editing operation for the three-dimensional character model, an editing operation for a three-dimensional border model, and a custom editing operation for at least one three-dimensional information component. The target three-dimensional model is presented at the presentation position in the virtual scene when the presentation condition for the target three-dimensional model is satisfied. In this case, content of the model changes based on a selected language, a competition situation, and performance of the player. Because direct text and numeric information can be quickly conveyed to the player, a presentation that has a high sense of showing off, a harmonious visual performance, and a low usage threshold is brought to the player.
The following describes an example application of the embodiments of this application in an actual application scenario.
In a related shooting game, to consider both computing performance of an electronic device and a presentation effect of a game scene, a three-dimensional model globally presented to a player is generally preset according to a particular rule, and supports only slight customization by the player among limited options. For example, referring to
For example, referring to
Based on this, the embodiments of this application provide a method for processing data in a virtual scene. The virtual scene may be a shooting game scene. By combining a three-dimensional (3D) model with three-dimensional text (or a three-dimensional image, that is, a 3D UI), the method implements object information transmission of a virtual object in the virtual scene, supports a player in custom editing a three-dimensional virtual character model (also referred to as a character model) consistent with an appearance of a character, a corresponding border model, and an information component mounted on the border model, and finally generates a complete three-dimensional model. In the method, a terminal receives custom data sent by a game server of the virtual scene, and provides, by dynamically creating a Mesh or a UI in a fixed slot (each part of the three-dimensional model in the preview area of the editing interface), the player with the possibility of autonomously editing a specified part of the three-dimensional model. In this case, various playing designs and commercialization drop designs from upstream can be carried; to be specific, more flexible editing space and differentiation possibilities can be provided to the player. In addition, both a reduced performance cost and presentation combining the three-dimensional model (3D model) and three-dimensional text information (including content such as text and numbers) are considered, to provide a presentation system that has a high sense of showing off, a harmonious visual performance, and a low usage threshold to the player.
The following describes a method for processing data in a virtual scene provided in the embodiments of this application from a display aspect of a product side.
First, projected content (a three-dimensional model that needs to be displayed) is determined. A projected player (the projected content is a dynamic 3D model of the player) may be the three-dimensional model corresponding to the player performing the best in previous games or performing better in the current competition. The corresponding determination logic may be adjusted by a game designer, but it needs to be ensured that the remaining players can perform strategic deployment in the competition by using the information.
During actual implementation, referring to
During actual implementation, when a shooting game starts, a virtual object controlled by the player enters the game. When presentation time of the three-dimensional model of the virtual object controlled by the player arrives, a 3D model content setting of the player and to-be-presented data information are pulled, the to-be-presented data information is changed from a 2D UI to the 3D UI, and the 3D UI and the 3D model are combined into one entire model (that is, the three-dimensional model) for presentation, so that entire model presentation is finally obtained. Referring to
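The assembly step can be illustrated with the following sketch; the data layout is hypothetical and stands in for the pulled model setting and data information:

```cpp
#include <string>
#include <vector>

struct UiElement3D { std::string text; };  // one 3D text/number/icon element

struct EntireModel {
    std::string modelSettingId;   // the player's 3D model content setting
    std::vector<UiElement3D> ui;  // the attached 3D UI
};

// Each pulled piece of data information becomes a 3D UI element, and the 3D UI
// and the 3D model are combined into one entire model for presentation.
EntireModel AssembleEntireModel(const std::string& modelSettingId,
                                const std::vector<std::string>& dataInformation) {
    EntireModel m{modelSettingId, {}};
    for (const auto& text : dataInformation)
        m.ui.push_back(UiElement3D{text});
    return m;
}
```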
An entire model (three-dimensional model) combining a 3D model and a 3D UI is presented in game competition, and words, numbers, and icons in the 3D UI may change based on a selected language, a competition situation, and performance. Because direct word information and data information can be quickly transmitted to the player, this provides a presentation system that has a high sense of showing off, a harmonious visual performance, and a low usage threshold to the player.
During actual implementation, referring to
In addition, referring to
The following describes a technical implementation process of a method for processing data in a virtual scene provided in the embodiments of this application from an implementation aspect of a technical side.
From a performance perspective, an increase in 3D models is accompanied by an increase in a quantity of operations (a quantity of drawcalls) by which a CPU invokes an underlying graphics interface, and this causes performance degradation. From a localization perspective, a conventional 3D model does not satisfy multi-language requirements. Therefore, the 3D model and the 3D UI may be combined to present a career card of the corresponding virtual object.
First, a career card that needs to be presented is created, an editable static component (three medals and one border) and a UI component (a username, a season master value, a competition performance value, and the like) are mounted, an editable UI class is created, and the 3D UI is aligned to the border.
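A possible data layout for such a career card is sketched below; the field names and the alignment policy (copying the border's pose) are assumptions:

```cpp
#include <array>
#include <string>

struct Transform { float x = 0, y = 0, z = 0; };

struct CareerCard {
    std::array<std::string, 3> medalSlots;  // editable static components (three medals)
    std::string borderId;                   // editable static component (one border)
    std::string username;                   // mounted UI components
    int seasonMasterValue = 0;
    int competitionPerformanceValue = 0;
    Transform uiTransform;                  // pose of the 3D UI
};

// Align the 3D UI to the border by adopting the border's pose.
void AlignUiToBorder(CareerCard& card, const Transform& borderPose) {
    card.uiTransform = borderPose;
}
```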
For example, referring to
To reduce storage load of object information of a virtual object, during actual implementation, editing information may be saved through a mapping of item identifier (ID) -> blueprint configuration -> blueprint resource. Only the item IDs autonomously edited by the player need to be saved in the game backend. When a career card is displayed, a client finds the related resource paths by reading a table based on the item IDs autonomously selected by the player and transmitted by a game server, consolidates the needed resource paths of the card, and uniformly and asynchronously loads the collated resources. When a resource is loaded, the resource is placed on the card.
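The ID-to-resource resolution and asynchronous loading might be sketched as follows; the table type, loader, and placement step are hypothetical stand-ins for the engine's real facilities:

```cpp
#include <future>
#include <string>
#include <unordered_map>
#include <utility>
#include <vector>

using ResourceTable = std::unordered_map<int, std::string>;  // item ID -> resource path

// Read the table to map the player's item IDs to the resource paths of the card.
std::vector<std::string> ResolveResourcePaths(const ResourceTable& table,
                                              const std::vector<int>& itemIds) {
    std::vector<std::string> paths;
    for (int id : itemIds)
        if (auto it = table.find(id); it != table.end()) paths.push_back(it->second);
    return paths;
}

// Placeholder for the engine's real loader; loading is modeled as a string here.
std::string LoadResource(const std::string& path) { return "loaded:" + path; }

// Uniformly and asynchronously load the collated resources; when the future is
// ready, the loaded resources can be placed on the card.
std::future<std::vector<std::string>> LoadResourcesAsync(std::vector<std::string> paths) {
    return std::async(std::launch::async, [paths = std::move(paths)] {
        std::vector<std::string> loaded;
        for (const auto& p : paths) loaded.push_back(LoadResource(p));
        return loaded;
    });
}
```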
To enable the 3D UI to respond to a click, camera ray detection may be used: a ray is sent from the currently used camera to the clicked position, a collision body on the ray is detected, and the clicked item is obtained, so that the click event is correctly triggered. To achieve decoupling, an event system may also be used: each click event is sent in an event manner and is registered on demand, which facilitates later expansion and use by other systems.
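The ray test itself can be illustrated with a standard slab test against axis-aligned collision boxes; everything here is a generic sketch, not an engine API:

```cpp
#include <optional>
#include <vector>

struct Vec3 { float x, y, z; };
struct Aabb { Vec3 min, max; int itemId; };  // collision body of a clickable item

// Standard slab test for a ray (origin + t * dir, t >= 0) against an AABB.
bool RayHitsAabb(const Vec3& origin, const Vec3& dir, const Aabb& box) {
    float tMin = 0.0f, tMax = 1e9f;
    const float o[3] = {origin.x, origin.y, origin.z};
    const float d[3] = {dir.x, dir.y, dir.z};
    const float lo[3] = {box.min.x, box.min.y, box.min.z};
    const float hi[3] = {box.max.x, box.max.y, box.max.z};
    for (int i = 0; i < 3; ++i) {
        if (d[i] == 0.0f) {
            if (o[i] < lo[i] || o[i] > hi[i]) return false;  // parallel and outside the slab
            continue;
        }
        float t1 = (lo[i] - o[i]) / d[i], t2 = (hi[i] - o[i]) / d[i];
        if (t1 > t2) { const float t = t1; t1 = t2; t2 = t; }
        if (t1 > tMin) tMin = t1;
        if (t2 < tMax) tMax = t2;
        if (tMin > tMax) return false;
    }
    return true;
}

// Sends the camera ray through all collision bodies and reports the clicked item
// (first hit in list order; nearest-hit selection is omitted for brevity).
std::optional<int> PickClickedItem(const Vec3& cameraPos, const Vec3& clickDir,
                                   const std::vector<Aabb>& bodies) {
    for (const auto& b : bodies)
        if (RayHitsAabb(cameraPos, clickDir, b)) return b.itemId;
    return std::nullopt;  // the click event is then dispatched through the event system
}
```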
A Box collision box may also be added in the manufacturing process, a collision response parameter is adjusted, and a size of the Box collision box is adjusted to be larger than a size of the career card (a size of the entire model), so that vehicles and characters in the virtual scene are prohibited from passing through. This also avoids errors that easily occur when the 3D UI has no collision.
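Sizing the collision box slightly larger than the card, as described, might be done as in this small sketch (the margin value is an assumption):

```cpp
struct Size3 { float x, y, z; };

// Scale each dimension so the collision box strictly exceeds the entire model,
// which keeps vehicles and characters from passing through the 3D UI.
Size3 CollisionBoxForCard(const Size3& cardSize, float margin = 0.1f) {
    return {cardSize.x * (1.0f + margin),
            cardSize.y * (1.0f + margin),
            cardSize.z * (1.0f + margin)};
}
```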
In this case, the 3D UI can be successfully combined with the 3D model. Therefore, this solves a performance consumption problem and provides sufficient editable space to the player.
During actual implementation, referring to
Applying the embodiments of this application achieves the foregoing beneficial effects.
The following continues to describe an exemplary structure in which the data processing apparatus 555 in a virtual scene provided in the embodiments of this application is implemented as software modules. In some embodiments, as shown in
In some embodiments, the apparatus further includes:
In some embodiments, the editing module is further configured to: receive the character editing instruction for the character model based on the editing interface, the character editing instruction being configured to instruct to edit character content of the character model, the character content including at least one of the following: a material, a posture, and an item; display at least one candidate character content corresponding to the character content in response to the character editing instruction; and in response to a selection instruction for the candidate character content, determine selected candidate character content as target character content of the character model, to obtain a target three-dimensional model having the target character content.
In some embodiments, the editing module is further configured to: display a content editing control of the character model in the editing interface, the content editing control including a material control configured for editing a material of the character model, a posture control configured for editing a posture of the character model, and an item control configured for editing an item of the character model; and receive the character editing instruction for the character model in response to a triggering operation for the content editing control.
In some embodiments, the editing interface includes a preview area of the character model, and the editing module is further configured to: display a preview image of the character model in the preview area of the character model; and receive the character editing instruction in response to a triggering operation for a target part in the preview image, the character editing instruction being configured to instruct to edit character content corresponding to the target part, and different sections in the preview image corresponding to different character content.
In some embodiments, the editing interface includes a preview area of the character model, and the editing module is further configured to: in response to a selection instruction for the candidate character content, display, in a preview area of the character model, a preview image of the character model having the target character content.
In some embodiments, the editing module is further configured to display a plurality of candidate postures when the character content includes the posture.
Correspondingly, in some embodiments, the editing module is further configured to: in response to a selection instruction for the candidate posture, determine a selected candidate posture as a target posture of the character model.
Correspondingly, in some embodiments, the presentation module is further configured to: present, at a presentation position of the three-dimensional model in the virtual scene, a process in which the target three-dimensional model of the virtual object performs the target posture in sequence.
In some embodiments, the editing module is further configured to: receive a border editing instruction for the border model based on the editing interface, the border editing instruction being configured to instruct to edit the border model; display at least one candidate border model in response to the border editing instruction; and in response to a selection instruction for the candidate border model, determine a selected candidate border model as a target border model, to obtain the target three-dimensional model having the target border model.
In some embodiments, the editing module is further configured to: display a border editing control corresponding to the border model in the editing interface; and receive the border editing instruction for the border model in response to a triggering operation for the border editing control.
In some embodiments, the editing interface includes a preview area of the border model, and the editing module is further configured to: display a selected candidate border model in the preview area of the border model in response to a selection instruction for the candidate border model.
In some embodiments, the editing module is further configured to: display at least one addition bit of the information component on the candidate border model in the preview area of the border model; display at least one type of object information of the virtual object in response to a triggering operation for the addition bit; and display an information component corresponding to the object information on the addition bit in response to a selection operation for the object information.
In some embodiments, the editing module is further configured to: display an editing control for editing the information component in the editing interface; display at least one type of object information of the virtual object in response to a triggering operation for the editing control; and in response to a selection operation for the object information, determine the information component corresponding to selected object information as an information component of the border model in the target three-dimensional model.
In some embodiments, the presentation module is further configured to: obtain presentation time of the target three-dimensional model, and when the presentation time arrives, determine that the presentation condition of the target three-dimensional model is satisfied; or display a presentation control of the target three-dimensional model, and when the presentation control is triggered, determine that the presentation condition of the target three-dimensional model is satisfied.
In some embodiments, the presentation module is further configured to: display an interface of the virtual scene, and when an editing condition of the three-dimensional model is satisfied, display editing prompt information in the interface of the virtual scene, the editing prompt information being configured to prompt that the target object has a permission to edit the three-dimensional model; and receive the editing instruction triggered based on the editing prompt information.
In some embodiments, the presentation module is further configured to determine that the editing condition of the three-dimensional model is satisfied when at least one of the following is satisfied: interaction performance of the virtual object in the virtual scene reaches a performance threshold; or a size of a virtual resource of the virtual object in the virtual scene reaches a resource size threshold.
In some embodiments, the presentation module is further configured to: obtain a position of the virtual object in the virtual scene when there are at least two presentation positions; select, based on the position of the virtual object in the virtual scene, a presentation position nearest to the virtual object from the at least two presentation positions as a target presentation position; and present the target three-dimensional model of the virtual object at the target presentation position.
In some embodiments, the presentation module is further configured to: when there are at least two presentation positions, and the at least two presentation positions correspond to two teams, determine, from the at least two presentation positions, a presentation position corresponding to the team to which the virtual object belongs; when there are at least two presentation positions corresponding to the team to which the virtual object belongs, generate duplicates of the target three-dimensional model; and present the corresponding duplicates of the target three-dimensional model respectively at the presentation positions corresponding to the team to which the virtual object belongs.
In some embodiments, the presentation module is further configured to: obtain virtual weather corresponding to the presentation position of the three-dimensional model in the virtual scene; and when the virtual weather is target weather, display the target three-dimensional model in a blurring state at the presentation position of the three-dimensional model in the virtual scene.
In some embodiments, the presentation module is further configured to: receive a target operation performed by a target virtual object for the target three-dimensional model; and display an operation result of the target operation in response to the target operation.
The embodiments of this application provide a computer program product or a computer program, the computer program product or the computer program including computer-executable instructions, and the computer-executable instructions being stored in a non-transitory computer-readable storage medium. A processor of an electronic device reads the computer-executable instructions from the computer-readable storage medium, and the processor executes the computer-executable instructions, so that the electronic device performs the method for processing data in a virtual scene in the embodiments of this application.
The embodiments of this application provide a non-transitory computer-readable storage medium having computer-executable instructions stored therein. When the computer-executable instructions are executed by a processor, the processor may perform the method for processing data in a virtual scene provided in the embodiments of this application, for example, the method for processing data in a virtual scene shown in
In some embodiments, the computer-readable storage medium may be a memory such as a read-only memory (ROM), a random access memory (RAM), an erasable programmable read-only memory (EPROM), an electrically erasable programmable read-only memory (EEPROM), a flash memory, a magnetic surface memory, an optical disc, or a CD-ROM, or may be any of various devices including one of or any combination of the foregoing memories.
In some embodiments, the executable instructions may be written in any form of programming language (including a compiled or interpreted language, or a declarative or procedural language) in a form of a program, software, a software module, a script, or code, and may be deployed in any form, including being deployed as an independent program or being deployed as a module, a component, a subroutine, or another unit applicable for use in a computing environment.
For example, the executable instructions may, but do not necessarily correspond to a file in a file system, and may be stored as a part of a file that saves another program or data, for example, stored in one or more scripts in a hypertext markup language (HTML) file, stored in a single file dedicated to a program in discussion, or stored in a plurality of collaborative files (for example, files that store one or more modules, subprograms, or code parts).
For example, the executable instructions may be deployed to be executed on one electronic device, or executed on a plurality of electronic devices located at one location, or executed on a plurality of electronic devices that are distributed in a plurality of locations and interconnected by a communication network.
In conclusion, the embodiments of this application provide the following beneficial effects: on-demand editing for a three-dimensional model may be implemented, flexibility of an editing operation is improved, integrity of the three-dimensional model during presentation may be ensured, and a presentation effect of the three-dimensional model is improved.
In sum, the term “module” in this application refers to a computer program or part of the computer program that has a predefined function and works together with other related parts to achieve a predefined goal and may be all or partially implemented by using software, hardware (e.g., processing circuitry and/or memory configured to perform the predefined functions), or a combination thereof. Each module can be implemented using one or more processors (or processors and memory). Likewise, a processor (or processors and memory) can be used to implement one or more modules. Moreover, each module can be part of an overall module that includes the functionalities of the module. The foregoing descriptions are merely examples of the embodiments of this application, and are not intended to limit the protection scope of this application. Any modification, equivalent replacement, and improvement made within the spirit and scope of this application shall fall within the protection scope of this application.
| Number | Date | Country | Kind |
|---|---|---|---|
| 202210966075.X | Aug 2022 | CN | national |
This application is a continuation application of PCT Patent Application No. PCT/CN2023/097393, entitled “DATA PROCESSING METHOD AND APPARATUS IN VIRTUAL SCENE, DEVICE, STORAGE MEDIUM, AND PROGRAM PRODUCT” filed on May 31, 2023, which claims priority to Chinese Patent Application No. 202210966075.X, entitled “DATA PROCESSING METHOD AND APPARATUS IN VIRTUAL SCENE, DEVICE, STORAGE MEDIUM, AND PROGRAM PRODUCT” filed on Aug. 12, 2022, both of which are incorporated herein by reference in their entirety.
| Number | Date | Country | |
|---|---|---|---|
| Parent | PCT/CN2023/097393 | May 2023 | WO |
| Child | 18761142 | US |