The present invention relates to the technical field of game data processing, in particular, to a data processing method and server based on voxel data, a medium, and a computer program product.
A game scene is a collection of all scene elements in a virtual space in a video game, including contents such as map landforms, buildings, game characters, and equipment items. An interface of the game scene seen by a game user is often displayed in the form of a pixel scene, that is, the contents of the game scene are displayed on a display screen according to a pixel data format.
In a multiplayer competitive game, AI functionality often needs to be added, that is, one or more simulated players are added to a game, and behaviors of the simulated players are calculated by a server.
In the existing technical means, behavior trees are commonly used to calculate AI behaviors. If this calculation is performed on a CPU, traversal of tree data structures and detection and navigation tasks that consume CPU performance need to be performed. For competitive games with high real-time requirements, calculation efficiency is low and business needs cannot be met.
In order to overcome the technical defects, an objective of the present invention is to provide a data processing method and server based on voxel data, a medium, and a computer program product with higher operation efficiency.
The present invention discloses a data processing method based on voxel data, original data of a game scene constitutes a pixel scene, the pixel scene includes multiple different data types of scene elements, and the method includes:
Preferably, the scene element includes at least one of terrain, vegetation, a building and an outdoor decoration;
Preferably, the setting an expected side length of a unit voxel, and combined with the side length of the unit voxel, converting the original data of the multiple types of scene elements into voxel data respectively includes:
Preferably, the method further includes:
Preferably, the obtaining a voxel area within a target space range of a target object by clipping in the voxel scene, and using the voxel area as an input of a neural network in the form of a three-dimensional tensor, to obtain spatial features of the target object includes:
Preferably, the obtaining a voxel area within a target space range of a target object by clipping in the voxel scene, and using the voxel area as an input of a neural network in the form of a three-dimensional tensor, to obtain spatial features of the target object includes:
Preferably, the obtaining a voxel area within a target space range of a target object by clipping in the voxel scene, and using the voxel area as an input of a neural network in the form of a three-dimensional tensor, to obtain spatial features of the target object includes:
Preferably, the using the voxel data in the voxel scene for a specified target object and a field of view, and obtaining a depth map with a resolution of M*N through ray detection includes:
Preferably, the performing ray detection on the M*N paths along a direction from the starting point to the ending point until a solid point in the path is detected includes:
Where the path includes a starting point and an ending point;
Preferably, the method further includes:
Preferably, an area range of the field of view scene is an area range of the viewing cone of the game object; and
Preferably, based on a decision-making behavior that has occurred for the game object, the determining whether the decision-making behavior meets an occurrence condition in the field of view scene includes:
Preferably, the calculating the field of view scene that should be obtained by the game object at the current time includes:
Preferably, based on a decision-making behavior that has occurred for the game object, the determining whether the decision-making behavior meets an occurrence condition in the field of view scene, and if the occurrence condition is not met, considering that there is a fraud behavior includes:
Preferably, the obtaining a navigation path and a traveling manner between the game object and a destination by calculation includes:
Preferably, the step of determining through ray detection includes:
Preferably, the step of determining through ray detection further includes: performing ray detection on m paths in the GPU:
Preferably, the method further includes:
Preferably, the method further includes:
The present invention further discloses a data processing server based on voxel data, where original data of a game scene constitutes a pixel scene, and the pixel scene includes multiple different data types of scene elements, and the server includes:
The present invention further discloses a computer-readable storage medium for storing a data processing instruction based on voxel data, where original data of the game scene constitutes a pixel scene, and the pixel scene includes multiple different data types of scene elements, and when the instruction is executed, the following steps are performed:
The present invention further discloses a computer program product, including a computer-executable instruction, where the instruction is executed by a processor to implement the following steps:
After adopting the above technical solution, compared with the prior art, the technical solution has the following beneficial effects:
Advantages of the present invention are further described below with reference to the drawings and specific embodiments.
The exemplary embodiments are described in detail herein, and examples thereof are shown in the accompanying drawings. When the following description involves the drawings, unless otherwise indicated, the same numbers in different drawings indicate the same or similar elements. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the present disclosure. Rather, the implementations are merely examples of devices and methods consistent with some aspects of the present disclosure as detailed in the appended claims.
The terms used in the present disclosure are only for the purpose of describing specific embodiments, and are not intended to limit the present disclosure. The singular forms of “a”, “said” and “the” used in the present disclosure and the appended claims are also intended to include plural forms, unless the context clearly indicates other meanings. It should further be understood that the term “and/or” as used herein refers to and includes any or all possible combinations of one or more associated listed items.
It should be understood that although the terms first, second, third, etc. may be used in the present disclosure to describe various information, the information should not be limited to these terms. These terms are only used to distinguish the same type of information from each other. For example, without departing from the scope of the present disclosure, the first information may also be referred to as second information, and similarly, the second information may also be referred to as first information. Depending on the context, the word “if” as used herein can be interpreted as “when” or “while” or “in response to determining”.
In the description of the present invention, it should be understood that the orientation or positional relationship indicated by the terms “longitudinal”, “lateral”, “upper”, “lower”, “front”, “rear”, “left”, “right”, “vertical”, “horizontal”, “top”, “bottom”, “inner”, “outer”, etc. are based on the orientation or positional relationship shown in the drawings, and are only for the convenience of describing the present invention and simplifying the description, and do not indicate or imply that the pointed device or element must have a specific orientation, or be constructed and operated in a specific orientation, and therefore cannot be understood as a limitation of the present invention.
In the description of the present invention, unless otherwise specified and limited, it should be noted that the terms “installed”, “joint”, and “connection” should be understood in a broad sense. For example, the connection can be a mechanical connection or an electrical connection, or may be internal communication between two elements, or may be a direct connection or an indirect connection through an intermediate medium. For the person of ordinary skill in the art, the specific meaning of the above terms can be understood according to specific conditions.
In the subsequent descriptions, a suffix such as “module”, “component”, or “unit” used to represent an element is merely used to facilitate description of the present invention, and does not have a specific meaning. Therefore, “module” and “component” can be used interchangeably.
Regarding a voxel scene, a voxel (a portmanteau of the words volumetric and pixel) is a volume element, and a volume described as voxels can be visualized either by volume rendering or by the extraction of polygon iso-surfaces that follow the contours of given threshold values. A voxel is the smallest unit of digital data in the division of a three-dimensional space, and the unit voxel mentioned in the present invention can be understood as a single voxel. Voxels are used in fields such as 3D imaging, scientific data, and medical imaging. A voxel is conceptually analogous to a pixel, which is the smallest unit of a two-dimensional space and is used in the image data of two-dimensional computer images. Some true 3D displays use voxels to describe their resolution, for example, a display that can display 512×512×512 voxels.
In the field of 3D imaging technologies, a CPU-based computing mode is usually adopted, that is, various logical task operations and data preprocessing are performed on a CPU. However, the GPU is more suitable for concurrent data operations, such as image rendering. The combination of the existing spatial data structures and the GPU is not ideal: the existing spatial data structures take pixel data as the mainstream, and consequently the high-concurrency computing performance of the GPU cannot be exploited. In the present invention, the pixel data used for 3D imaging is converted into voxel data, which is highly compatible with the high-concurrency computing characteristics of the GPU, and various operations are performed on it. The performance of the GPU-based data computing solution is 2 to 3 orders of magnitude higher than that of the existing CPU-based computing mode. Therefore, the voxel data is used for high-performance ray detection to sense the environment, and calculation efficiency is high.
Specifically, referring to
S100: Export original data of multiple scene elements respectively.
S200: Set an expected side length of a unit voxel, and, in combination with the side length of the unit voxel, convert the original data of the multiple scene elements into voxel data respectively, where the voxel data is expressed as a voxel module in the voxel scene.
S300: According to relative positions of the voxel modules of all scene elements in the pixel scene, splice the voxel modules of all scene elements to obtain a voxel scene.
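As a minimal sketch (not part of the original disclosure), the core of the conversion in S200 can be illustrated as mapping a world-space coordinate to the index of the unit voxel that contains it, given the chosen side length; the function name is hypothetical:

```python
import math

def to_voxel_index(position, side_length):
    """Map a world-space (x, y, z) coordinate to the integer index
    of the unit voxel that contains it."""
    return tuple(math.floor(c / side_length) for c in position)

# With a unit-voxel side length of 0.5, the point (1.2, 0.3, -0.7)
# falls into voxel (2, 0, -2).
print(to_voxel_index((1.2, 0.3, -0.7), 0.5))
```

Choosing a smaller side length gives a finer voxel scene at the cost of more voxel data.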
In a preferred embodiment of the present invention, the game scene is a game world created by a UE4 game engine. Referring to
However, data types used by different scene elements are different when the scene elements are constructed. According to the types and characteristics of the scene elements, the original data of different scene elements are exported from a UE4 game engine in different ways. Specifics are as follows:
The outdoor decorations and the buildings belong to an Actor type including StaticMesh in the UE4 game engine, OBJ files can be directly exported, and coordinate information of Actors can be simultaneously exported. The OBJ files are 3D model files.
The vegetation does not belong to an independent Actor type in UE4. Therefore, coordinates and shapes need to be recorded, and the coordinates and shapes are exported as a CSV information file. The CSV information file is a comma-separated value file.
Only surface height information is used for the terrain in the voxel world, and the terrain image data can be exported in the present invention by orthographically shooting the terrain with a depth camera.
The export methods for the multiple types of original data can be implemented by referring to the documentation of the UE4 game engine, and are technical means mastered by persons skilled in the art.
Because the scene elements are of different types and have respective characteristics, different conversion methods are required to convert the various types of original data into voxel data represented as voxel modules in the game scene. First, an expected side length of the unit voxel needs to be set; a voxel module includes one or multiple unit voxels. Then, in combination with the side length of the unit voxel, the data of the multiple scene elements are respectively converted into voxel data. Specifics are as follows:
For an OBJ (3D model) file, the OBJ file may be directly converted into voxel data (a voxel module) with the help of the “read_triangle_mesh” function and the “create_from_triangle_mesh” function in the open source library Open3D. The present invention is not limited to Open3D, and other open source libraries that can implement the above two functions can also be used for data conversion.
For the vegetation, according to the size of the collision body of the vegetation and the side length of the unit voxel, the number of voxel modules that the vegetation occupies in the voxel world and the shape of the voxel module need to be directly obtained by calculation.
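The vegetation calculation can be sketched as follows, assuming (as a simplification not stated in the original) an axis-aligned box collision body whose dimensions are divided by the voxel side length:

```python
import math

def vegetation_voxel_extent(collision_size, side_length):
    """Given the (x, y, z) dimensions of a vegetation collision body,
    return how many unit voxels it occupies along each axis.
    Partially covered voxels are counted as occupied (ceiling)."""
    return tuple(math.ceil(d / side_length) for d in collision_size)

# A bush with a 1.2 x 1.2 x 2.0 collision body and a 0.5 voxel side
# occupies 3 x 3 x 4 unit voxels.
```

The shape of the resulting voxel module is then the set of voxels inside this extent.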
The terrain occupies only one layer in the voxel world. Referring to an area D in
So far, all the data for building the voxel scene has been obtained, and finally the voxel modules of all scene elements need to be spliced to obtain the voxel scene of all scene elements.
The voxel scene is represented in the program as a large number of three-dimensional coordinate points. The principle of splicing the voxel modules is to integrate the coordinate point information representing the voxel modules into the same data structure according to the relative positions of the voxel modules in the game map. However, the UE4 game engine contains position information and rotation information of each module. Therefore, when splicing is performed, special attention needs to be paid to the fact that the rules of Euler-angle rotation transformation of 3D models are inconsistent across different systems.
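The splicing principle can be sketched as follows; the data layout (a set of integer voxel coordinates per module) and the function name are illustrative assumptions, and the Euler-angle rotation handling mentioned above is deliberately omitted for brevity:

```python
def splice_modules(modules):
    """Merge voxel modules into a single voxel scene.

    `modules` is a list of (offset, local_points) pairs, where
    `offset` is the module's position in voxel coordinates and
    `local_points` is the set of solid voxel points of the module.
    Rotation (whose Euler-angle conventions differ between systems
    and must be handled with care) is omitted here.
    """
    scene = set()
    for (ox, oy, oz), points in modules:
        for (x, y, z) in points:
            # Translate each local point by the module's offset and
            # collect everything into one shared data structure.
            scene.add((x + ox, y + oy, z + oz))
    return scene

# Two one-voxel modules placed at different offsets yield a
# two-point voxel scene.
```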
After splicing, the voxel scene shown in
Preferably, after the pixel scene is converted into a voxel scene, a target space range of the target object is clipped and used as an input of a neural network. A spatial feature is obtained by using the neural network, and the spatial feature includes information contained in the space, for example, whether voxel points within the target space range are solid points. The target object can be a game object or a scene element, and the game object can be a character controlled by a player or a character controlled by AI.
Specifically, a voxel area within the target space range of the target object is obtained by clipping in the voxel scene, and the voxel area is used as an input of the neural network in the form of a three-dimensional tensor to obtain spatial features of the target object.
The target space range includes a first spatial feature range, a second spatial feature range, ..., and an Nth spatial feature range. The target space range may be one of the spatial feature ranges, or a combination of multiple spatial feature ranges. The present invention provides a preferred embodiment in which three spatial feature ranges are included. The first spatial feature range is a surrounding space range, that is, the range of adjacent spaces around the target object. The second spatial feature range is a near front space range, that is, a space range within a certain distance in front of the target object. The third spatial feature range is a far front space range, that is, a space range beyond a certain distance in front of the target object. Both the near front space range and the far front space range belong to the front space range. It follows that the target space range does not include the space range of the target object itself. Specific application examples of the above surrounding space range, front space range, near front space range, and far front space range are as follows:
In this example, the target object can be understood as a character in the game scene. During the game, the character needs to know what equipment can be picked up nearby, that is, environment information around the character needs to be captured, and equipment information is extracted and identified from the environment information. The environment information is usually limited to a preset range. For example, when there is equipment information within two meters around the character, the character is notified that there is equipment that can be picked up nearby.
In this case, the size of the cube corresponds to the value of “two meters”.
The term “corresponding” rather than “equal” is used because the size of the screen data of the game scene may not be exactly the same as the size actually perceived by the character; there may be a corresponding relationship of enlargement or reduction.
In this example, the target object can also be understood as a character in the game scene. The character needs to recognize a behavior action of an enemy in front of its field of vision, so that a behavior selection of the character is made according to the behavior action, for example, dodging an attack in a fighting game, or dodging a missile in a scene game. That is, environment information in front of the field of vision of the character and enemy behavior information need to be captured, and an action that can be cooperated with or avoided is extracted and identified from the environment information and the enemy behavior information within the environmental range. The environment information and the enemy behavior information are usually limited to a preset range. For example, when there is missile information within 50 meters in front of the field of vision of the character, the character is informed that a relative dodge or another defensive action needs to be performed. In this case, the length dimension of the cuboid corresponds to the value of “fifty meters”.
In this example, the target object is also a character in the game. The character, who holds a gun, needs to use a telescopic sight to obtain a distant view. Therefore, the environment range that needs to be extracted in this case should be farther, to correspond to a distant aiming object. In this case, the length dimension of the slender cuboid corresponds to the distance between the distant aiming object and the character.
When the setting is made, the length dimension of the slender cuboid can be set to correspond to the farthest distance that the telescopic sight can see, so as to ensure that the clipped area meets the sight-distance requirement of the telescopic sight.
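The clipping of a target space range into a neural-network input can be sketched as follows; the representation of the scene as a set of solid voxel coordinates and the function name are assumptions for illustration, and a real implementation would likely emit a framework tensor rather than nested lists:

```python
def clip_voxel_area(solid_points, origin, shape):
    """Clip an axis-aligned voxel area out of the scene and return it
    as a 3D tensor (nested lists) of 0/1 attribute values, suitable
    as a neural-network input.

    `solid_points` is the set of solid voxel coordinates, `origin`
    the minimum corner of the target space range, and `shape` its
    size in voxels along each axis.
    """
    ox, oy, oz = origin
    sx, sy, sz = shape
    return [[[1 if (ox + i, oy + j, oz + k) in solid_points else 0
              for k in range(sz)]
             for j in range(sy)]
            for i in range(sx)]

# A 2x2x2 clip around the origin of a scene with one solid point.
```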
In addition to directly applying the voxel data to the input of the neural network, because the format of the voxel data is standardized and regular and lends itself to calculation, especially parallel calculation, the present invention applies the voxel data to ray detection on the GPU: a depth map is generated through ray detection, and the depth map is then used as the feature-extraction input of the neural network, so that the performance of the voxel data can be better utilized.
Specifically, for the specified target object and field of view, the voxel data in the voxel scene is used to obtain a depth map with a resolution of M*N through ray detection, and the depth map is used as an input of the neural network to obtain spatial features of the target object.
Preferably, the generation of the depth map specifically includes the following:
It should be noted that the viewing cone is an abstract conical spatial range in which the line of sight is emitted from the eyes of a “person”, one of the characters in the game scene. The detection surface of the viewing cone is usually a plane (a surface with a curvature of 0). In the voxel scene, for the convenience of calculation, calculation is usually based on a spherical coordinate system, in which this surface is a curved surface.
During a ray detection process, along the direction from the starting point to the ending point (which can also be understood as from near to far), if a solid point is detected for the first time at a certain position, it indicates that the path is blocked at this position, and the depth of the field of view ends at this position. In this way, a depth map is formed.
Preferably, specific steps of ray detection are as follows:
The starting point (x1, y1, z1) and the ending point (x2, y2, z2) lie on the same straight line. For ray detection, a ray is emitted from the starting point (x1, y1, z1) to the ending point (x2, y2, z2), and a ray path is formed between the two points.
During the ray detection process, if a solid point is detected, that is, the attribute value of the voxel coordinate point is 1, it indicates that the path is blocked and not passable; if there is no solid point, that is, the attribute value of every voxel coordinate point on the path is 0, it indicates that the path is unblocked and passable. Passage can be understood as passage in the spatial sense of the character, and can also be understood as smoothness of signal transmission.
For example, during the game, there is a need to determine whether the character can hit a distant target with a gun. The starting point of detection is the character, and the ending point is the distant target. If the detection result is that there is no obstacle, it indicates that, under the current aiming path, the distant target can be hit by normal shooting. If the detection result is that there is an obstacle, it indicates that, under the current aiming path, the distant target cannot be hit by normal shooting.
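The per-path ray detection described above can be sketched as follows. This is a simplified CPU march along the ray, sampling the voxel attribute at fixed steps, not the disclosed GPU implementation; a production version would use an exact voxel traversal, and all names are illustrative. Repeating this detection for each of the M*N paths of the viewing cone yields the depth map:

```python
import math

def detect_ray(solid_points, start, end, side_length=1.0, step=0.25):
    """March from `start` toward `end`; return the distance at which
    the first solid voxel is hit, or None if the path is clear."""
    sx, sy, sz = start
    dx, dy, dz = (end[0] - sx, end[1] - sy, end[2] - sz)
    length = math.sqrt(dx * dx + dy * dy + dz * dz)
    if length == 0:
        return None
    ux, uy, uz = dx / length, dy / length, dz / length
    t = 0.0
    while t <= length:
        p = (sx + ux * t, sy + uy * t, sz + uz * t)
        voxel = tuple(math.floor(c / side_length) for c in p)
        if voxel in solid_points:
            return t          # blocked: the depth of view ends here
        t += step
    return None               # unblocked: the path is passable
```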
The following provides an example of using voxel data to obtain a depth map with a resolution of 320*180 for a pair of a specified character and a field of view:
In the ray detection process for obtaining the depth map, each path only needs to be detected up to the point where a blockage occurs, that is, the depth is obtained and the detection is stopped; the resulting depth map can also be understood as a pixel map.
By determining different game targets and fields of view, different spatial features can further be extracted and input into the neural network to perform spatial object segmentation, spatial object classification and behavior reinforcement learning.
In addition, the method further includes: determining whether a behavior of the game object is fraudulent.
The game processing process of the game object is carried out in a voxel scene. The voxel scene includes a game object and a field of view scene of the game object. The game object moves and executes decision-making behaviors in the field of view scene. The field of view scene of the game object is a collection of scene elements within the field of view of the game object. The data of the voxel scene is processed by a server and a client, and the server and the client are connected in a wireless or wired manner. The client can be a device such as a mobile phone, a tablet computer, or a computer, and the server is a single server or a server group. Usually, each client correspondingly operates one game object; however, for some game types, a client may correspondingly operate multiple game objects. A voxel scene is a three-dimensional space composed of unit voxels. In this three-dimensional space, various scene elements (such as terrain, buildings, game characters, and equipment props) are constructed by using the data format of unit voxels.
The method for determining the behavior of the game object in the present invention mainly includes preventing a game fraud behavior in advance and blocking a game fraud behavior halfway. Specifics are as follows:
The idea of preventing the fraud behavior in this embodiment is as follows: all the relevant game data of the game objects (that is, player data) is placed on the server side, the game data of each game object is calculated on the server side, and then the game data is delivered to the client; the game data received by the client is only the data that the game object should receive, and there is no data that the game object should not receive.
It can be understood as follows (without limitation): When a game session is in progress, character data of this game object, character data of other game objects, data of the field of view scene, map data, configuration data, etc. are required. If all the above game data were delivered to the client, and calculation were performed separately at each client to drive the progress of the game, there would be a risk that a user maliciously obtains the above data and destroys the rules of the game. For example, a user obtains a perspective function by using a cheating program on the client, and another game object behind an obstacle can be seen in the field of view of the game object of the user. However, if all the above data are placed on the server side, calculation is performed on the server side, the data actually required by each game object is sent to the corresponding client, and unnecessary data is not delivered, the user can be prevented from maliciously obtaining and using the data, thereby avoiding game fraud at the source. That is, even if the user uses a cheating program on the client, the game object behind the obstacle cannot be seen in the field of vision, because the data of the game object behind the obstacle is never delivered to the client.
It should be noted that the game data delivered by the server to the client mentioned in the present invention is the dynamic data required to construct the overall game scene, not all the data of the overall game scene. It can be understood that the game data includes data that can change dynamically, such as other game objects and game props that can be picked up, and does not include existing scene data that does not change, such as terrain and vegetation. The size of the dynamic data is far smaller than the total data size of the overall game scene. In this way, the load on data communication between the server and the client can be reduced, and real-time data communication can be supported during the running of the game.
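The server-side delivery policy above can be sketched as follows; the function names, the visibility predicate, and the data layout are hypothetical illustrations of the principle, not the disclosed implementation:

```python
import math

def data_to_deliver(dynamic_objects, viewer_pos, visible):
    """Server-side selection of the dynamic game data delivered to one
    client: only objects the viewer's game object is entitled to see
    are sent; static scene data (terrain, vegetation) is never resent.

    `visible` is a predicate (e.g. backed by voxel ray detection)
    deciding whether an object can be seen from `viewer_pos`.
    """
    return [obj for obj in dynamic_objects if visible(viewer_pos, obj)]

# Example with a purely distance-based predicate (illustrative):
# objects farther than 100 meters are simply not delivered, so a
# client-side cheat cannot reveal them.
```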
In addition to the obtaining of data, the game fraud behavior further includes using normal data to conduct abnormal decision-making operations during the game. For example, under normal operating conditions, there is an obstacle between this game object and another game object, so that a successful shooting action is difficult to achieve, or the distance between this game object and another game object is too large, so that a successful shooting action is difficult to achieve. However, during a game in which the fraud behavior occurs, this game object nevertheless completes shooting at the other game object.
It should be noted that the game object mentioned in the present invention can be understood as a game object operated by the currently monitored client, while another game object can be understood as a game object operated by another client, an AI game object of the game server, a non-player object, etc. Moreover, the game object can be a game character, or a game element other than a game character, such as a vehicle, an airdrop, or equipment.
For the above game fraud behavior, the present invention further proposes an anti-fraud method that blocks the game fraud behavior halfway, that is, during the game process, the decision-making behavior of the game object is monitored in real time, and it is determined whether the current game environment supports the decision-making behavior; if the current game environment does not support the decision-making behavior, it is considered that there is a game fraud behavior. Generally, this halfway-blocking anti-fraud method is also carried out on the server side, but it is not excluded that the method can also be carried out on the client side when the client has obtained the relevant permissions and is configured with the relevant modules. For example, the field of view scene of this game object can be used as a part of the game scene to participate in the determining. The field of view scene is the game scene that the user can observe when operating the game object on the client. If the game object performs a game action (such as shooting) on another game object that can be seen in the game scene, this game action, that is, the decision-making behavior, meets the occurrence condition. However, if there is an obstacle in the field of view scene, another game object behind the obstacle cannot be observed in the field of view scene of this game object, so that a game operation on that game object is not supported, that is, the occurrence condition is not met; if a game operation on a game object that cannot be observed is detected, it is determined that there is a fraud behavior.
After the fraud behavior is determined, a game account corresponding to the game object and the fraud behavior are recorded immediately. Furthermore, the game session can be terminated immediately, or other measures can be taken according to the seriousness of the fraud behavior.
Preferably, an area range of the field of view scene is the area range of a viewing cone of the game object, and the area within the area range of the viewing cone is considered to be the area that the player (that is, the user) should see on the client display interface. Here, the game object may be a game character in the game; the game character has an anthropomorphic field of vision, and the range of the field of view of the game character can be observed from a first-person or third-person perspective on the client.
Further, in the area of the viewing cone, if it is determined through ray detection that there is a blockage in the path between the game object and another game object, the other game object is not displayed.
For example, if there is another game object in the bushes within the field of vision of this game object, this game object should not be able to see that game object.
For another example, if this game object is outside a house and another game object is inside the house, the game object outside the house should not be able to see the game object inside the house.
Preferably, the decision-making behavior of the game object includes a shooting behavior, a hitting behavior, a healing behavior, and so on.
Regarding the shooting behavior, in the present invention, it is determined through ray detection whether the path between the game object and another game object is unblocked. If the path is unblocked and the distance between the game object and the other game object meets the shooting condition, it is considered that the shooting condition is met and a shooting action can be completed. Otherwise, if there is a blockage in the path between this game object and the other game object, or the distance between them does not meet the shooting condition, a shooting (hitting) behavior should not occur.
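The occurrence-condition check for the shooting behavior can be sketched as follows. This is a self-contained simplification under stated assumptions: the scene is a set of solid voxel coordinates with unit side length, the line-of-sight test samples the segment at fixed steps rather than using the GPU ray detection of the disclosure, and all names are illustrative:

```python
import math

def path_blocked(solid_points, start, end, step=0.25):
    """Simplified line-of-sight test: sample points along the segment
    and report whether any falls inside a solid unit voxel."""
    length = math.dist(start, end)
    n = max(1, int(length / step))
    for i in range(n + 1):
        t = i / n
        p = tuple(s + (e - s) * t for s, e in zip(start, end))
        if tuple(math.floor(c) for c in p) in solid_points:
            return True
    return False

def shooting_condition_met(solid_points, shooter, target, max_range):
    """The shooting behavior meets its occurrence condition only if the
    target is within the weapon's supported range and the path between
    the two objects is unblocked; a shot detected in violation of this
    is treated as fraud."""
    if math.dist(shooter, target) > max_range:
        return False              # too far: condition not met
    return not path_blocked(solid_points, shooter, target)
```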
Usually, the determining of a decision-making behavior is determining performed only after the occurred decision-making behavior has occurred and when the decision-making behavior is detected. Therefore, after it is determined that the current game environment does not support the decision-making behavior, it can be determined that the decision-making behavior of the player is fraudulent.
Regarding the hitting behavior, whether a condition is met is usually determined according to the distance between the game object and another game object. If the distance between the two game objects exceeds the distance supported by the hitting behavior, it is considered that there is a fraud.
Preferably, the ray detection process based on the voxel scene includes:
Indexing can be understood as searching. When determining through detection whether n points are solid points, those n points must first be “reached”, and indexing is this step of “reaching”. It can be understood as a dedicated step in computer processing.
In the world of voxel data, 1 represents a solid point, and 0 represents a hollow point. An object is represented in the world of voxel data as a series of 1s at certain positions. To write an object means to write 1 at multiple specific positions in the world of voxel data; to erase an object means to write 0 at multiple specific positions in the world of voxel data.
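The indexing and 1/0 write-erase convention described above can be sketched as follows. The flat storage layout, scene dimensions, and function names are illustrative assumptions, not the invention's data format:

```python
# Hypothetical scene dimensions; each voxel occupies one byte (0 = hollow).
W = H = D = 16
grid = bytearray(W * H * D)

def index(x, y, z):
    """Indexing: map a voxel coordinate to its storage position ("reaching" it)."""
    return x + y * W + z * W * H

def write_object(points):
    """Write an object: write 1 at each of its positions (solid points)."""
    for x, y, z in points:
        grid[index(x, y, z)] = 1

def erase_object(points):
    """Erase an object: write 0 back at the same positions (hollow again)."""
    for x, y, z in points:
        grid[index(x, y, z)] = 0

crate = [(3, 4, 0), (4, 4, 0), (3, 5, 0), (4, 5, 0)]
write_object(crate)
print(grid[index(3, 4, 0)])  # → 1
erase_object(crate)
print(grid[index(3, 4, 0)])  # → 0
```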
In addition, the present invention applies voxel data to GPU ray detection, so that high-speed multi-path concurrent operations can be implemented. It can be understood that, on the GPU, the time to detect multiple rays by using the voxel data is the same as the time to detect one ray, so the time to detect multiple rays can be greatly shortened. In this way, the calculation load on the server side is greatly reduced, making it feasible to centralize the calculation of players' game data on the server side.
In another embodiment, the voxel scene can further be another application scenario. In this other application scenario, the scene elements are not limited to terrain, vegetation, buildings, and outdoor decorations, and may also be other scene elements different from these. Correspondingly, data export of the other scene elements and conversion of the voxel data may adopt a method different from that of this embodiment. This is not limited herein.
It should be noted that this does not mean that voxel data can only be used for calculations on the GPU. On a CPU, voxel data can still be used for calculation; however, when ray detection is performed, parallel calculation is not possible, and detection tasks for multiple paths can only be executed sequentially.
The following provides an example of parallel implementation of single-path ray detection by a GPU:
A point in the voxel data is found by indexing according to its coordinates in the voxel scene, and it is detected whether the point is a solid point. If a solid point is detected, 1 is written as the result (that is, a first detection result), indicating that the path corresponding to the thread is blocked. If no solid point is detected, 0 is written as the result (that is, a second detection result), indicating that the path corresponding to the thread can be passed without an obstacle.
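The per-thread detection just described can be sketched as a CPU emulation of the GPU kernel, where each "thread" indexes one sample point of the ray and writes a 1/0 result. The voxel contents and names are illustrative assumptions:

```python
# Hypothetical voxel scene: one solid point at x=3 along the ray.
solid_points = {(3, 0, 0)}

def detect_point_kernel(thread_id, samples, results):
    """One GPU thread's work: index one point, write 1 (solid) or 0 (hollow)."""
    results[thread_id] = 1 if samples[thread_id] in solid_points else 0

samples = [(x, 0, 0) for x in range(8)]   # sample points along a single ray
results = [0] * len(samples)
for tid in range(len(samples)):           # a GPU would launch these in parallel
    detect_point_kernel(tid, samples, results)

# If any thread wrote 1 (the first detection result), the path is blocked.
path_blocked = any(results)
print(results)       # → [0, 0, 0, 1, 0, 0, 0, 0]
print(path_blocked)  # → True
```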
The following provides an example of a GPU implementing multiple-ray detection in parallel:
In the present invention, measurement is performed on an NVIDIA RTX 3090 GPU and an AMD Threadripper 3990X CPU: when 1 million or more ray detections are performed simultaneously, the detection speed of the GPU is about 550 times that of a single CPU core, and the computing speed is significantly improved.
In the voxel scene, dynamic programming is usually required to calculate a navigation task, and the output result is a path. A dynamic programming task includes multiple nodes to be explored. It should be noted that such a node is a node in the sense of the task process and does not represent a specific reference in the data; in some embodiments, the node may be a specific voxel scene coordinate point. In dynamic programming, the nodes are explored sequentially. When a node is explored, it is unknown in advance whether ray detection is required or on which targets ray detection needs to be performed, and therefore a time-sequential task is formed. In the present invention, this time-dispersed task can be parallelized by using the GPU:
Because the storage space occupied by the ray detection results of a specified detection area is very small (one result occupies at most one byte, so 1 million detection results can be stored in 1 MB), the efficiency of executing the task by using the dynamic programming algorithm of the present invention is relatively high: only a reading time is needed.
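The two-phase scheme above can be sketched as follows: all candidate edges are detected up front (as a batched GPU pass would do), stored at one byte per result, and the subsequent path search only reads stored results. The grid, obstacle, and the name `find_path` are illustrative assumptions:

```python
from collections import deque

W, H = 4, 4
wall = {(1, 1), (1, 2), (1, 3)}  # hypothetical obstacle column

def neighbors(p):
    x, y = p
    for q in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
        if 0 <= q[0] < W and 0 <= q[1] < H:
            yield q

# Phase 1: "ray detection" for every candidate edge, one byte-sized result each.
blocked = {}
for x in range(W):
    for y in range(H):
        for q in neighbors((x, y)):
            blocked[((x, y), q)] = 1 if q in wall or (x, y) in wall else 0

# Phase 2: the path search only *reads* the precomputed detection results.
def find_path(start, goal):
    prev, frontier = {start: None}, deque([start])
    while frontier:
        p = frontier.popleft()
        if p == goal:
            path = []
            while p is not None:
                path.append(p)
                p = prev[p]
            return path[::-1]
        for q in neighbors(p):
            if q not in prev and not blocked[(p, q)]:
                prev[q] = p
                frontier.append(q)
    return None

print(find_path((0, 0), (3, 0)))  # → [(0, 0), (1, 0), (2, 0), (3, 0)]
```

The sketch uses breadth-first search as a stand-in for the dynamic programming algorithm; the point it illustrates is that exploration never triggers detection, only lookups.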
Preferably, there is also a need to simultaneously perform dynamic programming on multiple detection subjects, where each detection subject includes multiple detection targets. That is, ray detection is simultaneously performed on the x*y paths of the detection areas corresponding to the different detection targets, the detection results are saved, and a detection result table divided by detection area is formed. When dynamic programming is performed on a detection area, the detection result of that area can be retrieved from the detection result table.
For example, if dynamic programming needs to be performed for an area A, an area B, an area C, and an area D, ray detection is simultaneously performed on x*y paths in each area, and the detection time is only a single ray detection time t.
Detection results are saved, a detection result of the area A is a, a detection result of the area B is b, a detection result of the area C is c, a detection result of the area D is d, and a detection result table of A-a, B-b, C-c, D-d is formed. When dynamic programming is performed on the area A, the result a is retrieved.
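The A-a, B-b, C-c, D-d table from the example above can be sketched as follows; the area contents and the name `detect_area` are illustrative placeholders, and a GPU would run the per-area detections concurrently rather than in a dict comprehension:

```python
def detect_area(paths):
    """Stand-in for batched ray detection over the x*y paths of one area;
    each result occupies one byte (1 = blocked, 0 = clear)."""
    return bytes(1 if p == "blocked" else 0 for p in paths)

# Hypothetical path states for areas A-D.
area_paths = {
    "A": ["clear", "blocked", "clear"],
    "B": ["clear", "clear"],
    "C": ["blocked"],
    "D": ["clear"],
}

# One simultaneous detection pass fills the detection result table.
result_table = {name: detect_area(paths) for name, paths in area_paths.items()}

# When dynamic programming later runs on area A, its result a is just retrieved.
a = result_table["A"]
print(list(a))  # → [0, 1, 0]
```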
Regardless of whether there are multiple detection targets or multiple detection subjects, the ray detection time is only a single ray detection time t. In the subsequent actual process of dynamic programming, the results are simply read, and the computing speed is very fast.
Preferably, the game object often undergoes displacement during the game, that is, the positions of the game object are different at different times. Such a game object is called a dynamic game object here, and the voxel data of a dynamic game object in the field of view scene needs to be refreshed according to a preset cycle. The field of view scene that the game object should obtain at the current time is calculated on the server side, including:
The coordinates of the contour points are the coordinates of the contours that make up the object, and writing the voxel data of the object means writing 1 at the positions of the multiple contour points.
For example, if the coordinates of the center point of an airdrop in the game are (10, 10, 30), and the coordinates of one of the contour points of the airdrop are (0, 20, 50), the offset coordinates of that contour point relative to the center point are (−10, 10, 20). If the coordinates of the center point of the dynamic game object after refreshing are (10, 10, 25), the coordinates of the contour point after refreshing are (0, 20, 45). 1 is written at the refreshed position (0, 20, 45) of the contour point, and 0 is written at the original position (0, 20, 50). If the multiple contour points that make up the airdrop are refreshed simultaneously, the refreshing of the position of the airdrop is implemented.
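The airdrop example can be sketched with the coordinates given above; the storage as a dict and the name `refresh` are illustrative assumptions:

```python
# Sparse stand-in for the voxel scene: position -> 1 (solid) or 0 (hollow).
solid = {}

def write_point(p, v):
    solid[p] = v  # writing 1 places the object voxel; writing 0 erases it

def refresh(old_center, new_center, offsets):
    """Move an object: erase contour points at old positions, write new ones."""
    for off in offsets:
        old_p = tuple(c + o for c, o in zip(old_center, off))
        new_p = tuple(c + o for c, o in zip(new_center, off))
        write_point(old_p, 0)
        write_point(new_p, 1)

offset = (-10, 10, 20)  # contour point relative to the center, as in the text
refresh((10, 10, 30), (10, 10, 25), [offset])
print(solid[(0, 20, 50)], solid[(0, 20, 45)])  # → 0 1
```

Refreshing all contour points of an object in one pass is what the GPU parallelizes over in the next paragraph.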
By using voxel data, even global erasing and writing can be implemented in parallel on the GPU, and real-time refreshing of the voxel scene can be completed at a high speed at the level of 10 microseconds (0.01 milliseconds). In other embodiments of the present invention, all dynamic game objects in the entire game may be refreshed according to a preset period, and corresponding data are delivered to different clients according to the field of view scenes of different game objects.
The dynamic game objects can be all game objects that may move, such as game characters, vehicles, and airdrops.
Preferably, in addition to determining decision-making behaviors such as shooting behaviors, hitting behaviors, and healing behaviors, fraud behaviors can further be determined by determining whether the route taken by the game object and the traveling manner are compliant. Specifics are as follows:
For example, the game object is on flat ground, and the destination is a house on a cliff. If a straight line (that is, a navigation path) is taken, the destination can only be reached by using equipment to jet (fly) (that is, a traveling manner); if a detour along a curve (that is, a navigation path) is taken, the destination can be reached by walking (that is, a traveling manner). That is, the straight-line path only supports the traveling manner of jetting (flying), while the curved path supports traveling manners such as walking, jumping, and jetting (flying). If it is detected that the game object reaches the house by walking in a straight line and does not have equipment to jet (fly), it can be determined that there is a fraud behavior for the game object.
For another example, this game object is by a river, and the destination is the other side of the river. The game sets the river to be relatively deep, so that only taking a water vehicle or jetting (flying) is supported for crossing to the other side, and crossing without any water vehicle or equipment is not supported for the character. If it is detected that the game character crosses the river without using any water vehicle or equipment, it is considered that there is a fraud behavior.
For another example, if it is set that the game object consumes an energy value of s after moving a certain distance, but it is actually detected that the energy value consumed by the game object after moving a certain distance is less than s, it is considered that there is a fraud behavior for the game object.
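The energy-consumption check in the last example can be sketched as follows; the per-distance cost and the name `is_movement_fraudulent` are illustrative assumptions:

```python
# Hypothetical rule: moving one unit of distance consumes 2.0 energy,
# so moving distance d should consume s = d * 2.0.
ENERGY_PER_UNIT = 2.0

def is_movement_fraudulent(distance, reported_energy_spent, tolerance=1e-6):
    """Flag fraud if the player consumed less energy than the movement requires."""
    expected = distance * ENERGY_PER_UNIT
    return reported_energy_spent + tolerance < expected

print(is_movement_fraudulent(10.0, 20.0))  # → False: exactly s was consumed
print(is_movement_fraudulent(10.0, 5.0))   # → True: less than s was consumed
```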
Preferably, the calculation of the navigation path and the traveling manner between the game object and the destination is based on the layer data of the voxel scene and the connection data between layers, and specifically includes the following:
The present invention further discloses a data processing system based on voxel data. Original data of a game scene constitutes a pixel scene, and the pixel scene includes multiple different data types of scene elements. The system includes:
The system includes a hardware structure and a computer-readable storage medium. The above functional modules may be integrated on the hardware structure or may be integrated on the computer-readable storage medium. This is not limited herein. Moreover, the connection relationship of the above functional modules may be a tangible connection or an intangible cross-region connection. This is not limited herein. In addition, the system and corresponding method embodiments belong to the same idea. For details of a specific implementation process, refer to the corresponding method embodiments. Details are not repeated herein.
The present invention further discloses a data processing server based on voxel data. Original data of a game scene constitutes a pixel scene, the pixel scene includes multiple different data types of scene elements, and the server includes:
In addition, the server and corresponding method embodiments belong to the same idea. For details of a specific implementation process, refer to the corresponding method embodiments. Details are not repeated herein.
The present invention further discloses a computer-readable storage medium for storing a data processing instruction based on voxel data. Original data of a game scene constitutes a pixel scene, and the pixel scene includes multiple different data types of scene elements, and the following steps are implemented when the instruction is executed:
The computer-readable storage medium can be integrated in hardware, and when the hardware runs, the computer-readable storage medium can be read and executed.
In addition, the computer-readable storage medium and the corresponding method embodiments belong to the same idea. For details of a specific implementation process, refer to the corresponding method embodiments. Details are not repeated herein.
The present invention further discloses a computer program product, including a computer executable instruction, and the instruction is executed by a processor to implement the following steps:
In addition, the computer program product and the corresponding method embodiments belong to the same idea. For details of a specific implementation process, refer to the corresponding method embodiments. Details are not repeated herein.
It should be noted that the above embodiments are preferred implementations of the present invention and do not limit the present invention in any form. Any person skilled in the art may use the technical content disclosed above to make changes or modifications into equivalent effective embodiments. However, any amendments or equivalent changes and modifications made to the above embodiments based on the technical essence of the present invention, without departing from the content of the technical solution of the present invention, still fall within the scope of the technical solution of the present invention.
Number | Date | Country | Kind
---|---|---|---
202111160564.8 | Sep 2021 | CN | national
202111160592.X | Sep 2021 | CN | national
202111163612.9 | Sep 2021 | CN | national

Filing Document | Filing Date | Country | Kind
---|---|---|---
PCT/CN2022/122494 | Sep 29, 2022 | WO |