This application relates to the field of computer technologies, and in particular, to virtual object interaction.
With the development of Internet technologies, game applications have become an increasingly common part of people's lives, and how to increase the richness of a game's virtual scene has become a difficult problem.
Generally, in a target virtual scene, richness of the virtual scene can be improved by adding virtual elements, for example, adding terrain and weather.
However, if richness is improved only by adding virtual elements, the system space occupied by the game application grows larger and larger, and freezing readily occurs in a multi-player interaction scene, so that the stability of the virtual element interaction process is affected.
In view of this, this application provides a virtual object interaction method, which can effectively avoid scene freezing caused by adding a large quantity of virtual elements, and improve stability of a virtual object interaction process.
According to one aspect, an embodiment of this application provides a virtual object interaction method, applicable to a system or a program including a function of virtual object interaction in a computer device (e.g., a smartphone), the method specifically including: displaying a target virtual scene for an adversarial game between at least two virtual objects, the target virtual scene comprising a target interaction region;
According to another aspect, an embodiment of this application provides a computer device, including: a memory, a transceiver, a processor, and a bus system, the memory being configured to store program code, and the processor being configured to perform the virtual object interaction method in the foregoing aspects according to instructions in the program code.
According to another aspect, an embodiment of this application provides a non-transitory computer-readable storage medium, storing a computer program, the computer program being configured to perform the virtual object interaction method in the foregoing aspects.
According to still another aspect, an embodiment of this application further provides a computer program product including instructions, the computer program product, when run on a computer, causing the computer to perform the virtual object interaction method in the foregoing aspects.
As can be seen from the foregoing technical solutions, the embodiments of this application have the following advantages:
A target virtual scene for an adversarial game between at least two virtual objects is obtained and displayed, where the target virtual scene includes a target interaction region; an interaction value is then increased in response to a target operation of a first virtual object occupying the target interaction region; and a game outcome based on the interaction value associated with the first virtual object is then determined, e.g., the target interaction region is further updated in the target virtual scene when an existence time of the target interaction region reaches the valid time and the interaction value is lower than a preset value. In this way, a continuous virtual element interaction process is achieved. Because virtual object interaction is only guided by switching the target interaction region in a virtual scene and a large quantity of virtual elements are not introduced, occupancy of resources in the virtual element interaction process is reduced, and stability of the virtual object interaction is improved.
To describe the technical solutions in the embodiments of this application or in the related art more clearly, the following briefly describes the accompanying drawings required for describing the embodiments or the related art. Apparently, the accompanying drawings in the following descriptions show merely the embodiments of this application, and a person of ordinary skill in the art may still derive other drawings from the accompanying drawings without creative efforts.
Embodiments of this application provide a virtual object interaction method and a related apparatus, which may be applicable to a system or a program including a function of virtual object interaction in a terminal device. The method obtains a target virtual scene for an adversarial game between at least two virtual objects, where the target virtual scene includes a first virtual object and a target interaction region, and the target interaction region has a valid time; determines (e.g., increases) an interaction value in response to a target operation of the virtual object in the target interaction region; and determines a game outcome based on the interaction value associated with the first virtual object, e.g., updates the target interaction region in the target virtual scene when an existence time of the target interaction region reaches the valid time and the interaction value is lower than a preset value. In this way, a continuous virtual element interaction process is achieved. Because virtual object interaction is only guided by switching the target interaction region in a virtual scene and a large quantity of virtual elements are not introduced, occupancy of resources in the virtual element interaction process is reduced, and stability of the virtual object interaction is improved.
The terms such as “first”, “second”, “third”, and “fourth” (if any) in the specification and claims of this application and in the accompanying drawings are used for distinguishing between similar objects and not necessarily used for describing any particular order or sequence. Data used in this way may be interchanged in an appropriate case, so that the embodiments of this application described herein can be implemented in a sequence other than the sequence illustrated or described herein. In addition, the terms “include”, “corresponding to” and any other variants are intended to cover the non-exclusive inclusion. For example, a process, method, system, product, or device that includes a series of steps or units is not necessarily limited to those expressly listed steps or units, but may include other steps or units not expressly listed or inherent to such a process, method, product, or device.
The virtual object interaction method provided by this application may be applicable to the system or the program including the function of virtual object interaction in the terminal device, for example, a media content platform. Specifically, the virtual object interaction method may be implemented by a virtual object interaction system, and the virtual object interaction system may be, for example, a network architecture shown in
In this embodiment, the server may be an independent physical server, or may be a server cluster including a plurality of physical servers or a distributed system, or may be a cloud server providing basic cloud computing services, such as a cloud service, a cloud database, cloud computing, a cloud function, cloud storage, a network service, cloud communication, a middleware service, a domain name service, a security service, a content delivery network (CDN), big data, and an artificial intelligence platform. The terminal may be a smartphone, a tablet computer, a notebook computer, a desktop computer, a smart speaker, a smartwatch, or the like, but is not limited thereto. The terminal and the server may be directly or indirectly connected in a wired or wireless communication manner. This is not limited in this application.
The virtual object interaction method provided in this embodiment may alternatively be performed offline, that is, without participation of a server, for example, for a stand-alone game, a terminal is locally connected with another terminal, to perform a virtual object interaction process between terminals.
The virtual object interaction method may run on the foregoing terminal device, for example, a mobile terminal installed with a media content platform application; run on a server; or run on a third-party device that provides virtual object interaction, so as to obtain a processing result of the virtual object interaction. A specific virtual object interaction system may run on the above device in the form of a program, run as a system component in the above device, or be used as a cloud service program. The specific operation mode depends on the actual scene and is not limited herein.
To resolve the problems such as freezing caused by adding virtual elements and provide a richer interaction mode, this application provides a virtual object interaction method, and the method is applicable to a procedure framework of virtual object interaction shown in
The method provided by this application may be writing of a program, to be used as one type of processing logic in a hardware system, or be used as a virtual object interaction apparatus, and implement the foregoing processing logic in an integrated or externally connected manner. As an implementation, the virtual object interaction apparatus obtains a target virtual scene for an adversarial game between at least two virtual objects, where the target virtual scene includes a first virtual object and a target interaction region, and the target interaction region has a valid time; determines (e.g., increases) an interaction value in response to a target operation of the virtual object in the target interaction region; and determines a game outcome based on the interaction value associated with the first virtual object, e.g., updates the target interaction region in the target virtual scene when an existence time of the target interaction region reaches the valid time and the interaction value is lower than a preset value. In this way, a continuous virtual element interaction process is achieved. Because virtual object interaction is only guided by switching the target interaction region in a virtual scene and a large quantity of virtual elements are not introduced, occupancy of resources in the virtual element interaction process is reduced, and stability of the virtual object interaction is improved.
With reference to the foregoing procedure and architecture, the virtual object interaction method in this application is described below.
301. Obtain and display a target virtual scene.
In this embodiment, the target virtual scene includes virtual objects and a target interaction region, and the target interaction region is used for guiding interaction between the virtual objects. The virtual objects may belong to different camps, a camp being the party to which a virtual object belongs, and virtual objects of different camps may perform element interaction, for example, deduction of hit point values after a firefight. The target interaction region is a dynamic region, that is, it may be randomly generated in the target virtual scene, and the virtual objects need to move into the region to obtain corresponding virtual rewards, that is, corresponding interaction values.
The target interaction region has a valid time. Specifically, the valid time may be the duration for which the target interaction region exists. When the target virtual scene starts to be obtained, an update countdown of the target interaction region is started based on the valid time. For example, the countdown starts from the duration, and the target interaction region is considered to be within the valid time until the countdown returns to zero. During the valid time, the virtual objects need to enter the target interaction region to obtain interaction values. If the existence time of the target interaction region reaches the valid time (the countdown returns to zero), the target interaction region may disappear and may be refreshed to appear at another position in the target virtual scene based on a refresh mechanism. In this case, virtual objects remaining at positions within the original target interaction region can no longer obtain interaction values.
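The valid-time countdown described above can be sketched as follows. This is a minimal illustration only; the class name, method names, and the default duration are assumptions for the sketch, not details fixed by this application:

```python
import time

class InteractionRegion:
    """Sketch of a target interaction region with a valid time.

    The 60-second default duration is an illustrative assumption.
    """

    def __init__(self, valid_time_s=60.0, now=None):
        self.valid_time_s = valid_time_s
        self.created_at = time.monotonic() if now is None else now

    def remaining(self, now=None):
        """Countdown value: seconds left before the region is refreshed."""
        now = time.monotonic() if now is None else now
        elapsed = now - self.created_at
        return max(0.0, self.valid_time_s - elapsed)

    def is_valid(self, now=None):
        """The region is within its valid time until the countdown reaches zero."""
        return self.remaining(now) > 0.0
```

When `remaining()` reaches zero, the existence time has reached the valid time, and the refresh mechanism described above may relocate the region.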
In another possible scene, the valid time may alternatively be the time period for which the target interaction region exists after the virtual objects enter it. That is, when the virtual objects enter the target interaction region, the countdown of the valid time starts, and after the countdown ends, display of the target interaction region is canceled or the region is refreshed.
In some embodiments, the target virtual scene may be determined through a candidate virtual scene, for example, a process of map selection. Specifically, the candidate virtual scene is first determined in response to a selection instruction; a target hotspot is then generated based on the candidate virtual scene; the target interaction region is then generated according to the target hotspot; and the target interaction region is further deployed in the candidate virtual scene based on the target hotspot, to determine the target virtual scene. In a specific scene,
In a possible scene, a target hotspot used for indicating a center of a target interaction region may be displayed in a target virtual scene.
A process of generating prompt information based on a target hotspot may include a prompt direction or a distance, and indicates the prompt direction or the distance on an interface. Specifically, the prompt information is first generated based on the target hotspot, where the prompt information is used for indicating a direction or a distance; and a second virtual element in a target virtual scene is then updated according to the prompt information, where the second virtual element is used for guiding a virtual object to approach a target interaction region. For example, the second virtual element may be a guide arrow indicating a travel direction of the virtual object.
In some embodiments, the prompt information may alternatively be displayed through a third virtual element (a minimap), that is, position information of a target hotspot in a third virtual element is first determined, where the third virtual element is used for guiding a virtual object to approach a target interaction region; position coordinates of the virtual object in the third virtual element are then determined; and the prompt information is further determined based on the position information and the position coordinates. In a possible scene,
In some embodiments, the prompt information may alternatively be determined based on a position of a virtual object relative to a target hotspot in a target virtual scene. That is, an indication direction and position coordinates of the virtual object are first determined, where the indication direction is used for indicating a first direction line toward which the virtual object faces; a second direction line is then generated based on the position coordinates and the target hotspot; and the prompt information is further generated based on the first direction line and the second direction line. Specifically,
In a possible scene, the prompt information shown in
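The prompt information derived from the first direction line (the facing of the virtual object) and the second direction line (from the virtual object's position coordinates toward the target hotspot) can be sketched as a signed-angle computation. All names and the coordinate convention below are assumptions for illustration:

```python
import math

def prompt_angle(position, facing, hotspot):
    """Signed angle in degrees from the facing direction (first direction
    line) to the line toward the target hotspot (second direction line).
    A positive result indicates the hotspot lies to the object's left."""
    to_hotspot = (hotspot[0] - position[0], hotspot[1] - position[1])
    facing_angle = math.atan2(facing[1], facing[0])
    hotspot_angle = math.atan2(to_hotspot[1], to_hotspot[0])
    # Normalize the difference into (-180, 180] degrees.
    delta = math.degrees(hotspot_angle - facing_angle)
    while delta <= -180.0:
        delta += 360.0
    while delta > 180.0:
        delta -= 360.0
    return delta
```

The resulting angle could drive a guide arrow (the second virtual element) or a minimap marker (the third virtual element) described above.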
302. Determine an interaction value in response to a target operation of a virtual object occupying the target interaction region.
In this embodiment, the interaction value may also be referred to as a credit, a score, or the like. A specific interaction value may be determined through occupancy information, and the occupancy information is determined based on the occupancy time of the virtual objects of a single camp in the target interaction region, namely, the time during which only virtual objects of that single camp are present in the target interaction region. If a virtual object of another camp enters the target interaction region, occupancy is not formed. In this case, one party needs to destroy the other party to continue updating the occupancy information, thereby promoting interaction between the virtual objects. That is, the target operation is an operation by which one party keeps itself in the target interaction region.
The interaction value varies with the occupancy time of the virtual object in the target interaction region. Therefore, the interaction value may be determined dynamically based on the occupancy time, specifically using a certain functional relationship, for example, the interaction value being proportional to the occupancy time, such as increasing by 10 points per second of occupancy.
In another possible scene, the interaction value may alternatively be obtained based on the occupancy time and the interaction data indicated by the occupancy information, where the interaction data is battle data between the virtual objects. Specifically, it may be set that the interaction value increases by 10 points per second of occupancy and increases by 5 points each time a virtual object of a different camp is eliminated. Because the process of adjusting the interaction value based on the occupancy time stops after a virtual object of a different camp enters the target interaction region, the adjustment dimension of the interaction data is introduced in this case, which improves the representativeness of the interaction value and increases the interaction frequency of virtual users in the target virtual region.
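The scoring rule described above (10 points per second of sole occupancy plus 5 points per eliminated virtual object of a different camp) can be sketched as a single function. The function name, signature, and defaults are assumptions:

```python
def interaction_value(occupancy_seconds, eliminations,
                      points_per_second=10, points_per_elimination=5):
    """Combine the occupancy-time dimension with the interaction-data
    (battle data) dimension into one interaction value."""
    return (occupancy_seconds * points_per_second
            + eliminations * points_per_elimination)
```

For example, 3 seconds of sole occupancy plus 2 eliminations would yield 40 points under these illustrative weights.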
Because the target interaction region has an update condition, that is, the target interaction region may be updated after existing for a certain duration, the update condition may be displayed in the form of a timer, for example, in a case that no virtual object has entered the target interaction region.
In another scene, after a virtual object enters the target interaction region, a corresponding timer may also be generated.
In some embodiments, a determination of whether a virtual object has entered the target interaction region may be performed based on the triggering of a collision box. Specifically, a boundary collision box corresponding to the target interaction region is first determined; the boundary collision box is then triggered by a target operation of a virtual object of a single camp, to start the timer shown in
Correspondingly, after the user triggers the collision box and enters the target interaction region, the occupancy duration may be counted to accumulate the interaction value.
In a virtual object occupancy process, the backend may detect occupancy objects in the target interaction region in real time, and the occupancy objects are counted based on the camps of the virtual objects. If the occupancy objects meet a stop threshold, the timer is stopped. The stop threshold is the quantity of camps present in the target interaction region. Specifically, the stop threshold may be 2, that is, an interaction value may be obtained only when a single camp occupies the target interaction region. The stop threshold may also be a numerical value greater than 2, that is, when more camps participate in a battle, a plurality of camps may be allowed to obtain interaction values in the target interaction region. For example, when thirty camps participate in a battle, the stop threshold may be set to 5, that is, virtual objects of at most four camps may remain in the target interaction region to obtain interaction values. The specific numerical value depends on the actual scene and is not limited herein. By setting the stop threshold, interaction efficiency between the virtual objects is improved, and accuracy of the interaction value is ensured.
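The camp counting and stop-threshold check described above can be sketched as follows. The data structure (a mapping from occupant id to camp) and function name are assumptions for illustration:

```python
def timer_should_run(occupants, stop_threshold=2):
    """Decide whether the occupancy timer keeps running.

    `occupants` maps a virtual object id to its camp. The timer runs
    (and interaction values accrue) only while at least one camp is
    present and the number of distinct camps in the target interaction
    region is below the stop threshold.
    """
    camp_count = len(set(occupants.values()))
    if camp_count >= stop_threshold:
        return False  # contested by too many camps: stop the timer
    return camp_count > 0  # at least one camp must be occupying
```

With the default threshold of 2, a region held by one camp accrues value, while the arrival of a second camp stops the timer until one party eliminates the other.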
In a possible scene, when a virtual object occupies the target interaction region, a first virtual element (for example, a flickering mask) used for prompting the user that occupancy is in progress may be invoked, that is, the first virtual element is invoked in response to starting of the timer, where the first virtual element is used for indicating that the interaction value has changed. The target virtual scene is then updated based on the first virtual element. Specifically.
In addition, after the virtual object of another camp enters the target interaction region, element interaction needs to be performed between the virtual objects, to obtain an interaction value again. Specifically.
303. Update the target interaction region in the target virtual scene when an existence time of the target interaction region reaches the valid time and the interaction value is lower than a preset value.
In this embodiment, an update process may be performed when the existence time of the target interaction region reaches the valid time and the interaction value is lower than the preset value, so as to update the target interaction region and display the updated target interaction region in the target virtual scene, thereby further guiding virtual objects to perform interaction operations under the guidance of the updated target interaction region. The existence time of the target interaction region reaching the valid time may also be understood as the countdown based on the duration corresponding to the valid time having returned to zero at the current time.
For a scene in which the interaction value has not reached the preset value and the existence time of the target interaction region has not reached the valid time, interaction operations may continue between the virtual objects. That is, a virtual object may continue to obtain interaction values by occupying the target interaction region, and once the interaction value reaches the preset value, the operation of the virtual object ends, that is, the game battle in which the virtual object is manipulated ends.
Further, the update process of the target interaction region may be performed when the interaction value has not reached the preset value and the existence time of the target interaction region has reached the valid time. The preset value may be a numerical value, for example, 150; in that case, the preset value not being reached means that the interaction value has not reached 150. The preset value may alternatively indicate a difference value between interaction values of virtual objects of different camps, for example, 100; in that case, the preset value not being reached means that the difference value has not reached 100. The specific numerical value depends on the actual scene and is not limited herein.
After it is determined that the interaction value has not reached the preset value and the existence time of the target interaction region has reached the valid time, the target interaction region may be updated. A specific update process may be to delete the original target interaction region in the target virtual scene and then randomly generate one target interaction region in the target virtual scene. In another possible scene, positions of a plurality of target interaction regions may be preset, but only one target interaction region is displayed in the target virtual scene at a time, and switching is performed among the preset positions. The preset target interaction regions are independent of one another, that is, the updated target interaction region has no associated relationship with the other preset target interaction regions.
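The update condition and the switching among preset region positions can be sketched together. The preset position identifiers, function name, and parameters below are all illustrative assumptions:

```python
import random

# Hypothetical preset target interaction region positions.
PRESET_REGIONS = ("region_a", "region_b", "region_c")

def maybe_update_region(current, existence_time, valid_time,
                        interaction_value, preset_value, rng=random):
    """Switch to a different preset region only when the existence time
    has reached the valid time AND the interaction value is still below
    the preset value; otherwise keep the current region."""
    if existence_time >= valid_time and interaction_value < preset_value:
        candidates = [r for r in PRESET_REGIONS if r != current]
        return rng.choice(candidates)  # updated region excludes the old one
    return current
```

Excluding the current position from the candidates reflects the description above: the updated region is independent of, and different from, the original one.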
In some embodiments, to ensure that the virtual objects need to move when the target interaction region is switched, it may be set that the updated target interaction region does not include the virtual objects. Specifically, real-time position coordinates of the virtual objects in the target virtual scene are first determined; the region position coordinates of the target interaction region are then updated based on the real-time position coordinates, and the target interaction region may be updated at the position indicated by the region position coordinates. That is, a region of the target virtual scene that does not include the virtual objects is selected for the target interaction region, thereby ensuring that every virtual object needs to move to obtain an interaction value, and improving the interaction efficiency of the virtual objects.
In some embodiments, to ensure similarity of the movement distances of the virtual objects, that is, to prevent the updated target interaction region from being too close to one virtual object, which would make it easier for that virtual object to obtain interaction values and unbalance the interaction process, a process of updating the target interaction region shown in
In some embodiments, because there may be obstacles in the target virtual scene, for example, impenetrable terrain, houses, walls, and other virtual elements, the virtual objects need to detour to reach the updated target interaction region, so that their travel distances become inconsistent. On this basis, this application provides a manner of determining the region position coordinates, including: determining obstacle information surrounding the virtual objects according to the real-time position coordinates; generating route information based on distance weight information corresponding to the obstacle information, where the route information is used for indicating the distances of the virtual objects from the updated target interaction region; and updating the region position coordinates of the target interaction region based on the route information.
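The obstacle-weighted placement idea can be sketched as follows: each obstacle on a route inflates the effective travel distance, and the candidate position whose weighted distances are most balanced across all virtual objects is chosen. The data layout, weight value, and function names are assumptions for illustration:

```python
def weighted_distance(straight_distance, obstacle_count,
                      per_obstacle_weight=0.5):
    """Effective travel distance: each obstacle on the route inflates
    the straight-line distance by an illustrative weight."""
    return straight_distance * (1.0 + per_obstacle_weight * obstacle_count)

def pick_region_position(candidates, objects):
    """Pick the candidate position with the smallest spread between the
    largest and smallest weighted distances over all virtual objects.

    `candidates` maps a position id to {object_id: (distance, obstacles)};
    `objects` lists the virtual object ids to balance across.
    """
    def spread(pos):
        costs = [weighted_distance(*candidates[pos][o]) for o in objects]
        return max(costs) - min(costs)
    return min(candidates, key=spread)
```

Minimizing the spread approximates the fairness goal stated above: no virtual object is left markedly closer to the updated region than the others.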
In this case, a process of updating the target interaction region shown in
In a possible scene, there are two obstacles on the route between virtual object 1 and a target hotspot, and there is no obstacle on the route between virtual object 2 and the target hotspot. Therefore, the distances may be allocated such that the distance of virtual object 2 from the target hotspot is twice the distance of virtual object 1. The specific numerical relationship depends on the actual scene and is not limited herein. By setting the weight information, fairness of the updated target interaction region relative to each virtual object is guaranteed, and user experience is improved.
Determination of entry into the target interaction region is based on a collision box; in this case, if the collision box is set on an obstacle, a virtual object may be unable to enter the target interaction region. Therefore, obstacle information needs to be considered when setting the collision box. Specifically, when setting the collision box, the obstacle information in the target virtual scene is first determined; the boundary collision box is then adjusted based on the obstacle information, so that the region corresponding to the obstacle information does not include the boundary collision box. In a possible scene,
A process of obtaining obstacles and updating the collision boxes may be performed based on the process of obtaining the target virtual scene in step 301, that is, the obstacles are adjusted in the process of obtaining the target virtual scene; or be performed in a process of updating the target interaction region in this step, that is, surrounding obstacles are adjusted after the position of the updated target interaction region is determined.
In addition, for a scene in which the interaction value reaches the preset value, that is, when the interaction value reaches the preset value, the interaction value corresponding to each virtual object is displayed and the operation of the virtual object ends, that is, the game battle in which the virtual object is manipulated ends. The preset value may be a specific numerical value, for example, 150; in that case, the preset value being met means that the interaction value reaches 150. The preset value may alternatively indicate a difference value between interaction values of virtual objects of different camps, for example, the difference value reaching 100 meaning that the preset value is reached. The specific numerical value depends on the actual scene and is not limited herein.
In a possible scene, the score (interaction value) of a camp reaches the target numerical value. In this case, once the camp reaches the threshold, an interface shown in
With reference to the foregoing embodiments, a target virtual scene is obtained, where the target virtual scene includes a virtual object and a target interaction region, and the target interaction region has a valid time; an interaction value is determined in response to a target operation of the virtual object in the target interaction region; and the target interaction region is further updated in the target virtual scene when an existence time of the target interaction region reaches the valid time and the interaction value is lower than a preset value. In this way, a continuous virtual element interaction process is achieved. Because virtual object interaction is only guided by switching the target interaction region in a virtual scene and a large quantity of virtual elements are not introduced, occupancy of resources in the virtual element interaction process is reduced, and stability of the virtual object interaction is improved.
The foregoing embodiments describe the virtual object interaction process. The following is a description with reference to a specific response process.
With reference to the foregoing embodiments, dynamic generation of a hotspot improves interaction efficiency between virtual objects, and improves user experience.
For the convenience of better implementing the foregoing solutions in the embodiments of this application, the following further provides a related apparatus configured to implement the foregoing solutions.
In some embodiments, in some possible implementations of this application, the determining unit 1902 is specifically configured to determine occupancy information in response to the target operation of the virtual object in the target interaction region, the occupancy information being set based on an occupancy time of the virtual object in the target interaction region; and
In some embodiments, in some possible implementations of this application, the determining unit 1902 is specifically configured to determine interaction data of the virtual object in the target interaction region; and
In some embodiments, in some possible implementations of this application, the determining unit 1902 is specifically configured to determine a boundary collision box corresponding to the target interaction region;
In some embodiments, in some possible implementations of this application, the determining unit 1902 is specifically configured to detect occupancy objects in the target interaction region, the occupancy objects being used for counting camps based on the virtual objects, and
In some embodiments, in some possible implementations of this application, the determining unit 1902 is specifically configured to invoke a first virtual element in response to starting of the timer, the first virtual element being used for indicating a change of the interaction value; and
In some embodiments, in some possible implementations of this application, the interaction unit 1903 is specifically configured to determine real-time position coordinates of the virtual object in the target virtual scene; and
In some embodiments, in some possible implementations of this application, the interaction unit 1903 is specifically configured to determine obstacle information surrounding the virtual object according to the real-time position coordinates;
In some possible implementations of this application, the obtaining unit 1901 is specifically configured to determine a candidate virtual scene in response to a selection instruction;
In some possible implementations of this application, the obtaining unit 1901 is specifically configured to generate prompt information based on the target hotspot, the prompt information being used for indicating a direction or a distance; and
In some possible implementations of this application, the obtaining unit 1901 is specifically configured to determine an indication direction and position coordinates of the virtual object, the indication direction being used for indicating a first direction line toward which the virtual object faces;
In some possible implementations of this application, the obtaining unit 1901 is specifically configured to determine position information of the target hotspot in a third virtual element, the third virtual element being used for guiding the virtual object to approach the target interaction region;
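The direction-and-distance prompt described above can be sketched as follows. The function name and the planar-coordinate convention are assumptions: the relative bearing is the signed angle between the object's facing line (the "first direction line") and the line toward the target hotspot, and the distance is the straight-line distance.

```python
import math

# Hypothetical sketch of the prompt information: the relative bearing the
# virtual object must turn through to face the target hotspot, plus the
# straight-line distance to it. Coordinate conventions are assumptions.

def hotspot_prompt(object_pos, facing_deg, hotspot_pos):
    """Return (relative bearing in degrees, distance) toward the hotspot."""
    dx = hotspot_pos[0] - object_pos[0]
    dy = hotspot_pos[1] - object_pos[1]
    distance = math.hypot(dx, dy)
    bearing = math.degrees(math.atan2(dy, dx))
    # Signed turn angle, normalized to the interval (-180, 180].
    relative = (bearing - facing_deg + 180.0) % 360.0 - 180.0
    return relative, distance
```

An object at the origin facing along the x-axis, with a hotspot at (3, 4), would be prompted to turn roughly 53 degrees and travel a distance of 5.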
A target virtual scene is obtained, where the target virtual scene includes a virtual object and a target interaction region, and the target interaction region has a valid time; an interaction value is then determined in response to a target operation of the virtual object in the target interaction region; and the target interaction region is further updated in the target virtual scene when an existence time of the target interaction region reaches the valid time and the interaction value is lower than a preset value. In this way, a continuous virtual element interaction process is achieved. Because virtual object interaction is only guided by switching the target interaction region in a virtual scene and a large quantity of virtual elements are not introduced, occupancy of resources in the virtual element interaction process is reduced, and stability of the virtual object interaction is improved.
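The update rule summarized above can be expressed as a short decision function. This is a minimal sketch under stated assumptions: the parameter names are illustrative, and the choice of the next region from a candidate list stands in for whatever switching policy an implementation uses.

```python
# Illustrative sketch (hypothetical names): the target interaction region
# is switched, rather than extra virtual elements being added, when its
# existence time reaches the valid time while the interaction value is
# still below the preset value.

def update_region(existence_time, valid_time, interaction_value,
                  preset_value, current_region, candidate_regions):
    """Return the target interaction region for the next frame."""
    expired = existence_time >= valid_time
    low_interaction = interaction_value < preset_value
    if expired and low_interaction:
        # Switch to the next candidate target interaction region.
        return candidate_regions[0] if candidate_regions else current_region
    return current_region
```

Because only the region identifier changes, the scene itself carries no additional elements, which is the resource-saving point the paragraph above makes.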
An embodiment of this application further provides a terminal device.
The following makes a detailed description of the components of the mobile phone with reference to
The RF circuit 2010 may be configured to receive and send a signal in an information receiving and sending process or a call process, and in particular, after downlink information of a base station is received, send the downlink information to the processor 2080 for processing. In addition, the RF circuit transmits uplink data to the base station. Usually, the RF circuit 2010 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier (LNA), and a duplexer. In addition, the RF circuit 2010 may also communicate with a network and another device by wireless communication. The wireless communication may use any communication standard or protocol, including but not limited to Global System for Mobile Communications (GSM), general packet radio service (GPRS), Code Division Multiple Access (CDMA), Wideband Code Division Multiple Access (WCDMA), Long Term Evolution (LTE), email, Short Messaging Service (SMS), and the like.
The memory 2020 may be configured to store a software program and a module. The processor 2080 runs the software program and the module that are stored in the memory 2020, to perform various functional applications and data processing of the mobile phone. The memory 2020 may mainly include a program storage region and a data storage region. The program storage region may store an operating system, an application program required by at least one function (for example, a sound playback function and an image playback function), or the like. The data storage region may store data (for example, audio data and a phone book) created according to use of the mobile phone. In addition, the memory 2020 may include a high speed random access memory, and may further include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device or other non-volatile solid state storage devices.
The input unit 2030 may be configured to receive input digit or character information, and generate a keyboard signal input related to the user setting and function control of the mobile phone. Specifically, the input unit 2030 may include a touch panel 2031 and another input device 2032. The touch panel 2031, which may also be referred to as a touchscreen, may collect a touch operation of a user on or near the touch panel (such as an operation of a user on or near the touch panel 2031, and an air touch operation of the user within a certain range on the touch panel 2031 by using any suitable object or accessory such as a finger or a stylus), and drive a corresponding connection apparatus according to a preset program. In some embodiments, the touch panel 2031 may include two parts: a touch detection apparatus and a touch controller. The touch detection apparatus detects a touch orientation of the user, detects a signal brought by the touch operation, and transmits the signal to the touch controller. The touch controller receives touch information from the touch detection apparatus, converts the touch information into a contact coordinate, then transmits the contact coordinate to the processor 2080, and receives and executes a command transmitted by the processor 2080. In addition, the touch panel 2031 may be implemented by using various types, such as a resistive type, a capacitive type, an infrared type, and a surface acoustic wave type. In addition to the touch panel 2031, the input unit 2030 may further include another input device 2032. Specifically, another input device 2032 may include, but is not limited to, one or more of a physical keyboard, a functional key (such as a volume control key or a switch key), a track ball, a mouse, and a joystick.
The display unit 2040 may be configured to display information inputted by the user or information provided for the user, and various menus of the mobile phone. The display unit 2040 may include a display panel 2041. In some embodiments, the display panel 2041 may be configured by using a liquid crystal display (LCD), an organic light-emitting diode (OLED), or the like. Further, the touch panel 2031 may cover the display panel 2041. After detecting a touch operation on or near the touch panel 2031, the touch panel 2031 transfers the touch operation to the processor 2080, to determine a type of a touch event. Then, the processor 2080 provides a corresponding visual output on the display panel 2041 according to the type of the touch event. Although in
The mobile phone may further include at least one sensor 2050 such as an optical sensor, a motion sensor, and other sensors. Specifically, the optical sensor may include an ambient light sensor and a proximity sensor. The ambient light sensor may adjust luminance of the display panel 2041 according to brightness of the ambient light. The proximity sensor may switch off the display panel 2041 and/or backlight when the mobile phone is moved to the ear. As one type of motion sensor, an acceleration sensor can detect magnitude of accelerations in various directions (generally on three axes), may detect magnitude and a direction of the gravity when static, and may be applied to an application that recognizes the attitude of the mobile phone (for example, switching between landscape orientation and portrait orientation, a related game, and magnetometer attitude calibration), a function related to vibration recognition (such as a pedometer and a knock), and the like. Other sensors, such as a gyroscope, a barometer, a hygrometer, a thermometer, and an infrared sensor, which may be configured in the mobile phone, are not further described herein.
Wi-Fi belongs to a short distance wireless transmission technology. The mobile phone may help, by using the Wi-Fi module 2070, a user to receive and transmit an email, browse a web page, access stream media, and the like. This provides wireless broadband Internet access for the user. Although
The processor 2080 is a control center of the mobile phone, and is connected to various parts of the entire mobile phone by using various interfaces and lines. By running or executing a software program and/or module stored in the memory 2020, and invoking data stored in the memory 2020, the processor executes various functions of the mobile phone and performs data processing, thereby monitoring the entire mobile phone. In some embodiments, the processor 2080 may include one or more processing units. The processor 2080 may integrate an application processor and a modem: the application processor mainly processes an operating system, a user interface, an application program, and the like, and the modem mainly processes wireless communication. It may be understood that the foregoing modem may alternatively not be integrated into the processor 2080.
The processor 2080 is specifically configured to obtain a target virtual scene, where the target virtual scene includes virtual objects of at least two camps, the target virtual scene includes at least two target interaction regions independent of each other, the target interaction regions are switched in the target virtual scene according to an update condition, and the target interaction regions are used for guiding the virtual objects to perform object interaction in a valid time.
The processor 2080 is specifically configured to determine interaction values in response to target operations of the virtual objects in the target interaction regions, where the interaction values are obtained based on occupancy information of the virtual objects in the target interaction regions, and the occupancy information is used for indicating occupancy situations of the virtual objects of single camps in the target interaction regions.
The processor 2080 is specifically configured to determine a corresponding interaction process based on interaction information, where the interaction process includes switching of the target interaction regions or ending of the operations of the virtual objects.
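The three processor steps above (obtaining camp occupancy, deriving interaction values, and selecting an interaction process) can be sketched end to end. The function name, the win threshold, and the tuple-based result encoding are illustrative assumptions, not the claimed processing logic.

```python
# Hypothetical sketch of the processor's decision step: per-camp occupancy
# information is mapped to interaction values, and the interaction process
# is either ending the virtual objects' operations (a camp reached the
# target value) or switching the target interaction region (region expired).

def decide_interaction(occupancy_by_camp, win_value, region_expired):
    """Map per-camp occupancy seconds to an interaction process."""
    values = dict(occupancy_by_camp)  # here: 1 value point per second
    leader = max(values, key=values.get) if values else None
    if leader is not None and values[leader] >= win_value:
        return ("end_operations", leader)   # a camp reached the target value
    if region_expired:
        return ("switch_region", None)      # guide objects to a new region
    return ("continue", None)
```

For instance, a camp holding the region long enough to reach the target value ends the operations, whereas an expired region with no qualifying camp triggers a region switch.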
Although not shown in the figure, the mobile phone may further include a camera, a Bluetooth module, and the like, and details are not described herein.
In the embodiments of this application, the processor 2080 included in the terminal further has functions of performing the steps of the foregoing virtual object interaction method.
An embodiment of this application further provides a computer-readable storage medium, storing a computer program, the computer program being configured to perform the steps performed by the virtual object interaction apparatus in the method described in the embodiments shown in
An embodiment of this application further provides a computer program product, including virtual object interaction instructions, the computer program product, when run on a computer, causing the computer to perform the steps performed by the virtual object interaction apparatus in the method described in the embodiments shown in
An embodiment of this application further provides a virtual object interaction system, and the virtual object interaction system may include the virtual object interaction apparatus in the embodiment shown in
The virtual object interaction system is specifically configured to obtain a target virtual scene, where the target virtual scene includes virtual objects of at least two camps, the target virtual scene includes at least two target interaction regions independent of each other, the target interaction regions are switched in the target virtual scene according to an update condition, and the target interaction regions are used for guiding the virtual objects to perform object interaction in a valid time.
The virtual object interaction system is specifically configured to determine interaction values in response to target operations of the virtual objects in the target interaction regions, where the interaction values are obtained based on occupancy information of the virtual objects in the target interaction regions, and the occupancy information is used for indicating occupancy situations of the virtual objects of single camps in the target interaction regions.
The virtual object interaction system is specifically configured to determine a corresponding interaction process based on interaction information, where the interaction process includes switching of the target interaction regions or ending of the operations of the virtual objects.
A person skilled in the art can clearly understand that for convenience and conciseness of description, for specific working processes of the foregoing described system, apparatus and unit, refer to the corresponding processes in the foregoing method embodiments, and details are not described herein.
In the several embodiments provided in this application, the disclosed system, apparatus, and method may be implemented in other manners. For example, the described apparatus embodiment is merely exemplary. For example, the unit division is merely logical function division, and there may be another division in actual implementation. For example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not performed. In addition, the displayed or discussed mutual couplings or direct couplings or communication connections may be implemented through some interfaces. The indirect couplings or communication connections between the apparatuses or units may be implemented in electronic, mechanical, or other forms.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one position, or may be distributed on a plurality of network units. Some or all of the units may be selected according to actual requirements to achieve the objectives of the solutions in the embodiments.
In addition, functional units in the embodiments of this application may be integrated into one processing unit, or each of the units may exist alone physically, or two or more units are integrated into one unit. The integrated unit may be implemented in the form of hardware, or may be implemented in a form of a software functional unit.
When the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, the integrated unit may be stored in a computer-readable storage medium. Based on such an understanding, the technical solutions of this application essentially, or the part contributing to the related art, or the entire or some of the technical solutions may be implemented in a form of a software product. The computer software product is stored in a storage medium and includes several instructions for instructing a computer device (which may be a PC, a virtual object interaction apparatus, a network device, or the like) to perform all or some of the steps of the methods described in the embodiments of this application. The foregoing storage medium includes: any medium that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.

In sum, the term "unit" or "module" in this application refers to a computer program or part of the computer program that has a predefined function and works together with other related parts to achieve a predefined goal and may be all or partially implemented by using software, hardware (e.g., processing circuitry and/or memory configured to perform the predefined functions), or a combination thereof. Each unit or module can be implemented using one or more processors (or processors and memory). Likewise, a processor (or processors and memory) can be used to implement one or more modules or units. Moreover, each module or unit can be part of an overall module that includes the functionalities of the module or unit.
The foregoing embodiments are merely intended for describing the technical solutions of this application, but not for limiting this application. It is to be understood by a person of ordinary skill in the art that although this application has been described in detail with reference to the foregoing embodiments, modifications can be made to the technical solutions described in the foregoing embodiments, or equivalent replacements can be made to some technical features in the technical solutions, without departing from the spirit and scope of the technical solutions of the embodiments of this application.
This application is a continuation application of PCT Patent Application No. PCT/CN2021/094270, entitled “METHOD FOR INTERACTION WITH VIRTUAL OBJECT AND RELATED DEVICE” filed on May 18, 2021, which claims priority to Chinese Patent Application No. 202010523186.4, filed with the State Intellectual Property Office of the People's Republic of China on Jun. 10, 2020, and entitled “VIRTUAL OBJECT INTERACTION METHOD AND RELATED APPARATUS”, all of which are incorporated herein by reference in their entirety.
References Cited — U.S. Patent Documents

| Number | Name | Date | Kind |
|---|---|---|---|
| 20140094317 | Takagi | Apr 2014 | A1 |
| 20140235334 | Tarumi | Aug 2014 | A1 |

Foreign Patent Documents

| Number | Date | Country |
|---|---|---|
| 108619721 | Oct 2018 | CN |
| 108786114 | Nov 2018 | CN |
| 110711382 | Jan 2020 | CN |
| 110898428 | Mar 2020 | CN |
| 111672125 | Sep 2020 | CN |

Other Publications

| Entry |
|---|
| CN Pub. 110711382A English Translation via Google Patents, Jan. 21, 2020 (Year: 2020). |
| Tencent Technology, WO, PCT/CN2021/094270, Sep. 29, 2022, 4 pgs. |
| Tencent Technology, IPRP, PCT/CN2021/094270, Dec. 13, 2022, 5 pgs. |
| Tencent Technology, ISR, PCT/CN2021/094270, Aug. 17, 2021, 2 pgs. |

Publication Data

| Number | Date | Country |
|---|---|---|
| 20220266143 A1 | Aug 2022 | US |

Related U.S. Application Data

| Relation | Number | Date | Country |
|---|---|---|---|
| Parent | PCT/CN2021/094270 | May 2021 | WO |
| Child | 17743291 | | US |