METHODS AND APPARATUSES FOR RENDERING A HAIR VIRTUAL MODEL, COMPUTER DEVICES, AND STORAGE MEDIA

Information

  • Patent Application
  • Publication Number
    20250078374
  • Date Filed
    September 15, 2022
  • Date Published
    March 06, 2025
Abstract
A method for rendering a hair virtual model includes: obtaining particle attribute information of a hair particle in a hair particle model simulating a target hair rendering effect; obtaining a hair particle order parameter indicating an order of respective distances of each hair particle from a specified reference point in a model space; determining, based on the hair particle order parameter, a target hair particle currently involved in rendering; and obtaining a rendered hair virtual model by determining a hair segment rendering region based on the particle attribute information of the target hair particle and rendering the hair segment rendering region.
Description

This application claims priority to Chinese Patent Application No. 202210488443.4, filed in the Chinese Patent Office on May 6, 2022, and entitled “METHODS AND APPARATUSES FOR RENDERING A HAIR VIRTUAL MODEL, COMPUTER DEVICES, AND STORAGE MEDIA”. The disclosures of the above application are incorporated herein by reference in their entireties.


TECHNICAL FIELD

The present disclosure relates to computer image processing technologies, and more particularly, to methods and apparatuses for rendering a hair virtual model, computer devices, and storage media.


BACKGROUND

With the continuous development of computer communication technology and the wide popularization of terminals such as smartphones, tablets, and notebook computers, terminals have developed in a diversified and personalized direction and have gradually become indispensable in people's life and work. To satisfy people's pursuit of entertainment, games capable of running on the terminals keep emerging, and terminal games have become an indispensable way of living and entertainment. For a user to obtain a better game experience, many terminal games are constructed based on real characters and scenes. Therefore, during game design, a game scene is expected to be as close as possible to a real situation.


In practical game design projects, it is common to simulate human characters and animal characters, for example, to simulate their hair. During hair simulation, to ensure that the hair is displayed normally and to avoid a display disorder of the hair due to a change in the viewing angle of a game, the hair is usually rendered in a semi-transparent sorting manner with an anti-aliasing effect.


Technical Problem

Some embodiments of the present disclosure provide methods and apparatuses for rendering a hair virtual model, computer devices, and storage media, which can solve a problem in the prior art that the video memory occupation of a computer device rapidly increases when hair semi-transparent sorting rendering and an anti-aliasing effect are realized, causing inefficiency in hair rendering.


Technical Solutions

In a first aspect, one or more embodiments of the present disclosure provide a method for rendering a hair virtual model, including:

    • obtaining particle attribute information of a hair particle in a hair particle model, wherein the hair particle model simulates a target hair rendering effect;
    • obtaining a hair particle order parameter corresponding to the hair particle model, wherein the hair particle order parameter indicates an order of a respective distance of each hair particle from a specified reference point in a model space;
    • based on the hair particle order, determining a target hair particle of the hair particle currently involved in rendering; and
    • obtaining a rendered hair virtual model by determining a hair segment rendering region based on the particle attribute information of the target hair particle and rendering the hair segment rendering region.
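As an illustrative, non-authoritative sketch of how the four operations above fit together (every name here is an assumption for demonstration, and "rendering" a segment region is reduced to returning its width purely to show the flow):

```python
import math

def render_hair_virtual_model(particles, reference_point):
    """Hypothetical end-to-end sketch of the four claimed operations.
    A 'particle' is (position, width); rendering is stubbed out."""
    # Operation 1: particle attribute information is already in `particles`.
    # Operation 2: the order parameter = particle indices sorted by distance
    # from the specified reference point in model space (near to far).
    order = sorted(range(len(particles)),
                   key=lambda i: math.dist(particles[i][0], reference_point))
    rendered = []
    for i in order:
        # Operation 3: the target hair particle currently involved in rendering.
        position, width = particles[i]
        # Operation 4: determine a hair segment rendering region from the
        # particle attribute information and render it (stubbed as its width).
        rendered.append(width)
    return order, rendered

# Two particles at distances 2.0 and 1.0 from a camera at the origin.
order, rendered = render_hair_virtual_model(
    [((0, 0, 2), 0.3), ((0, 0, 1), 0.1)], (0, 0, 0))
print(order)  # [1, 0]: the nearer particle comes first
```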


In some embodiments, after obtaining the hair particle order parameter corresponding to the hair particle model, the method further includes:

    • obtaining a number of hair layer particles corresponding to the hair particle model, wherein the number of hair layer particles indicates a number of hair particles in each hair layer of the hair particle model, and the hair layer is obtained by layering the model space of the hair particle model based on the specified reference point; and
    • determining the target hair particle currently involved in rendering based on the number of hair layer particles and the hair particle order.


In some embodiments, before obtaining the hair particle order parameter corresponding to the hair particle model, the method further includes:

    • layering the model space based on a relative distance between the hair particle model and the specified reference point and a specified coordinate axis of the model space, to obtain a plurality of hair layers and a total number of the plurality of hair layers;
    • determining a number of hair layers corresponding to hair layers where all hair particles are located in the hair particle model based on the relative distance and the total number of the plurality of hair layers; and
    • determining the number of the hair layer particles and the hair particle order parameter based on the number of hair layers corresponding to the hair layers where all the hair particles are located in the hair particle model.


In some embodiments, the layering the model space based on the relative distance between the hair particle model and the specified reference point and the specified coordinate axis of the model space, to obtain the hair layers and the total number of the hair layers includes:

    • determining a first distance and a second distance between the hair particle model and the specified reference point in the model space, wherein the first distance indicates a minimum distance between the hair particle model and the specified reference point and the second distance indicates a maximum distance between the hair particle model and the specified reference point; and
    • layering the model space based on the first distance, the second distance, and the specified coordinate axis of the model space to obtain the plurality of hair layers and the total number of the plurality of hair layers.


In some embodiments, after layering the model space based on the first distance, the second distance, and the specified coordinate axis of the model space to obtain the hair layers and the total number of the hair layers, the method further includes:

    • determining a number of hair layers corresponding to a hair layer where the hair particle is located, based on the first distance, the second distance, and a target distance corresponding to the hair particle, wherein the target distance indicates a distance between the hair particle and the specified reference point;
    • generating a hair layer identifier based on the number of the hair layers, and associating the hair layer identifier with the hair particle; and
    • performing a prefix sum processing based on hair layer identifiers corresponding to all hair particles in the hair particle model to obtain the number of the hair layer particles.


In some embodiments, after layering the model space based on the first distance, the second distance, and the specified coordinate axis of the model space to obtain the plurality of hair layers and the total number of the plurality of hair layers, the method further includes:

    • sorting all the hair particles in the hair particle model from near to far according to the respective distances of the hair particles from the specified reference point based on the hair layer identifiers respectively corresponding to the hair particles, to generate the hair particle order parameter.


In some embodiments, the method further includes:

    • classifying all the hair particles in the hair particle model based on the number of hair layer particles and the hair particle order parameter to obtain a plurality of hair particle sets, wherein ones of the hair particles in each of the hair particle sets belong to a same hair layer of the hair layers and are arranged in order;
    • generating a hair particle sorting result based on the plurality of hair particle sets; and
    • determining the target hair particle currently involved in the rendering according to the hair particle sorting result.


In some embodiments, obtaining the rendered hair virtual model by determining the hair segment rendering region based on the particle attribute information of the target hair particle and rendering the hair segment rendering region includes:

    • obtaining an initial hair width corresponding to the target hair particle based on the particle attribute information of the target hair particle;
    • determining a width adjustment value for the initial hair width based on a distance between the target hair particle and the specified reference point;
    • adjusting the initial hair width based on the width adjustment value to obtain an adjusted hair width;
    • determining a plurality of region rendering vertices corresponding to the hair segment rendering region based on the adjusted hair width;
    • rendering a plurality of polygonal grids based on the plurality of region rendering vertices; and
    • generating the hair virtual model based on the plurality of polygonal grids.


In some embodiments, after generating the hair virtual model based on the plurality of polygonal grids, the method further includes:

    • determining a hair rendering pixel located in the hair segment rendering region;
    • determining, based on a tangent line of the hair particle, a relative distance between the hair rendering pixel and the tangent line as a pixel distance; and
    • adjusting current transparency of the hair rendering pixel based on the pixel distance.


In some embodiments, adjusting the current transparency of the hair rendering pixel based on the pixel distance includes:

    • performing a smoothness calculation based on a pixel width of the hair rendering pixel, the pixel distance, a preset conversion ratio, and a specified constant to obtain a transparency adjustment value for the hair rendering pixel; and
    • adjusting the current transparency of the hair rendering pixel based on the transparency adjustment value to obtain an adjusted hair rendering pixel.


In a second aspect, one or more embodiments of the present disclosure further provide an apparatus for rendering a hair virtual model, including:

    • a first obtaining unit configured to obtain particle attribute information of a hair particle in a hair particle model, wherein the hair particle model simulates a target hair rendering effect;
    • a second obtaining unit configured to obtain a hair particle order parameter corresponding to the hair particle model, wherein the hair particle order parameter indicates an order of a respective distance of each hair particle from a specified reference point in a model space;
    • a first determination unit configured to, based on the hair particle order, determine a target hair particle of the hair particle currently involved in rendering; and
    • a second determination unit configured to obtain a rendered hair virtual model by determining a hair segment rendering region based on the particle attribute information of the target hair particle and rendering the hair segment rendering region.


In some embodiments, the apparatus further includes:

    • a first obtaining subunit configured to obtain a number of hair layer particles corresponding to the hair particle model, wherein the number of hair layer particles indicates a number of hair particles in each hair layer of the hair particle model, and the hair layer is obtained by layering the model space of the hair particle model based on the specified reference point; and
    • a first determination subunit configured to determine the target hair particle currently involved in rendering based on the number of hair layer particles and the hair particle order.


In some embodiments, the apparatus further includes:

    • a first processing unit configured to layer the model space based on a relative distance between the hair particle model and the specified reference point and a specified coordinate axis of the model space, to obtain a plurality of hair layers and a total number of the plurality of hair layers;
    • a second determination subunit configured to determine a number of hair layers corresponding to hair layers where all hair particles are located in the hair particle model based on the relative distance and the total number of the plurality of hair layers; and
    • the second determination subunit further configured to determine the number of the hair layer particles and the hair particle order parameter based on the number of hair layers corresponding to the hair layers where all the hair particles are located in the hair particle model.


In some embodiments, the apparatus further includes:

    • a third determination subunit configured to determine a first distance and a second distance between the hair particle model and the specified reference point in the model space, wherein the first distance indicates a minimum distance between the hair particle model and the specified reference point and the second distance indicates a maximum distance between the hair particle model and the specified reference point; and
    • a second processing unit configured to layer the model space based on the first distance, the second distance, and the specified coordinate axis of the model space to obtain the hair layers and the total number of the plurality of hair layers.


In some embodiments, the apparatus further includes:

    • a fourth determination subunit configured to determine a number of hair layers corresponding to a hair layer where the hair particle is located, based on the first distance, the second distance, and a target distance corresponding to the hair particle, wherein the target distance indicates a distance between the hair particle and the specified reference point;
    • an association unit configured to generate a hair layer identifier based on the number of the hair layers, and associate the hair layer identifier with the hair particle; and
    • a third processing unit configured to perform a prefix sum processing based on hair layer identifiers corresponding to all hair particles in the hair particle model to obtain the number of the hair layer particles.


In some embodiments, the apparatus further includes:

    • a first generation unit configured to sort all the hair particles in the hair particle model from near to far according to the respective distances of the hair particles from the specified reference point based on the hair layer identifiers respectively corresponding to the hair particles, to generate the hair particle order parameter.


In some embodiments, the apparatus further includes:

    • a fourth processing unit configured to classify all the hair particles in the hair particle model based on the number of hair layer particles and the hair particle order parameter to obtain a plurality of hair particle sets, wherein ones of the hair particles in each of the hair particle sets belong to a same hair layer of the hair layers and are arranged in order;
    • a second generation unit configured to generate a hair particle sorting result based on the plurality of hair particle sets; and
    • a fifth determination subunit configured to determine the target hair particle currently involved in the rendering according to the hair particle sorting result.


In some embodiments, the apparatus further includes:

    • a second obtaining subunit configured to obtain an initial hair width corresponding to the target hair particle based on the particle attribute information of the target hair particle;
    • a sixth determination subunit configured to determine a width adjustment value for the initial hair width based on a distance between the target hair particle and the specified reference point;
    • a first adjustment unit configured to adjust the initial hair width based on the width adjustment value to obtain an adjusted hair width;
    • the sixth determination subunit further configured to determine a plurality of region rendering vertices corresponding to the hair segment rendering region based on the adjusted hair width;
    • a rendering unit configured to render a plurality of polygonal grids based on the plurality of region rendering vertices; and
    • a third generation unit configured to generate the hair virtual model based on the plurality of polygonal grids.


In some embodiments, the apparatus further includes:

    • a seventh determination subunit configured to determine a hair rendering pixel located in the hair segment rendering region;
    • the seventh determination subunit further configured to determine, based on a tangent line of the hair particle, a relative distance between the hair rendering pixel and the tangent line as a pixel distance; and
    • a second adjustment unit configured to adjust current transparency of the hair rendering pixel based on the pixel distance.


In some embodiments, the apparatus further includes:

    • a calculation unit configured to perform a smoothness calculation based on a pixel width of the hair rendering pixel, the pixel distance, a preset conversion ratio, and a specified constant to obtain a transparency adjustment value for the hair rendering pixel; and
    • a third adjustment unit configured to adjust the current transparency of the hair rendering pixel based on the transparency adjustment value to obtain an adjusted hair rendering pixel.


In a third aspect, one or more embodiments of the present disclosure further provide a computer device, including a memory and a processor, wherein the memory stores a computer program, and the processor, when calling the computer program in the memory, performs the method for rendering the hair virtual model.


In a fourth aspect, one or more embodiments of the present disclosure further provide a storage medium storing a computer program, and when executed by a processor, the computer program implements the method for rendering the hair virtual model.


Beneficial Effects

Some embodiments of the present disclosure provide methods and apparatuses for rendering a hair virtual model, computer devices, and storage media, which: obtain particle attribute information of a hair particle in a hair particle model; obtain a hair particle order parameter corresponding to the hair particle model, wherein the hair particle order parameter indicates an order of respective distances of each hair particle from a specified reference point in a model space; then determine, based on the hair particle order, a target hair particle of the hair particle currently involved in rendering; and finally obtain a rendered hair virtual model by determining a hair segment rendering region based on the particle attribute information of the target hair particle and rendering the hair segment rendering region. The embodiments of the present disclosure generate a hair particle model through simulation, split hair into hair particles, and render the hair particles, which reduces the video memory occupancy of a computer device and improves hair rendering efficiency. Moreover, by adjusting a pixel distance at a hair edge, it is possible to improve the colour relaxation and the anti-aliasing effect of the hair during hair rendering, thereby improving the authenticity of target hair.





BRIEF DESCRIPTION OF THE DRAWINGS

To describe technical solutions in some embodiments of the present disclosure more clearly, accompanying drawings required for describing the embodiments will be introduced briefly. It is apparent that the accompanying drawings in the following description illustrate merely some embodiments of the present disclosure, and other drawings may be obtained according to these drawings by those skilled in the art without involving any inventive effort.



FIG. 1 is a schematic flowchart of a method for rendering a hair virtual model according to one or more embodiments of the present disclosure.



FIG. 2 is a schematic diagram illustrating an application scenario of the method for rendering a hair virtual model according to one or more embodiments of the present disclosure.



FIG. 3 is a schematic structural diagram of an apparatus for rendering a hair virtual model according to one or more embodiments of the present disclosure.



FIG. 4 is a schematic structural diagram of a computer device according to one or more embodiments of the present disclosure.





DETAILED DESCRIPTION

Technical solutions in the embodiments of the present disclosure will be described clearly and completely below in connection with the accompanying drawings in the embodiments of the present disclosure. It is apparent that the described embodiments are only part of the embodiments of the present disclosure, and not all of them. Based on the embodiments in the present disclosure, all other embodiments obtained by a person skilled in the art without any inventive effort fall within the protection scope of the present disclosure.


Some embodiments of the present disclosure provide methods and apparatuses for rendering a hair virtual model, computer devices, and storage media. Specifically, the methods for rendering the hair virtual model of the embodiments of the present disclosure may be performed by a computer device, which may be a terminal device. The terminal may be a smartphone, a tablet computer, a notebook computer, a touch screen, a game machine, a Personal Computer (PC), a Personal Digital Assistant (PDA), or the like.


For example, when the method for rendering the hair virtual model is run on the terminal device, the terminal device provides a graphical user interface to a user in a plurality of ways. For example, the graphical user interface may be rendered for display on a display screen of the terminal device, or may be presented by holographic projection. For example, the terminal device may include a touch screen and a processor. The touch screen may be configured to display the graphical user interface and to receive operation instructions generated by the user acting on the graphical user interface. The processor is configured to generate the graphical user interface, respond to the operation instructions, and control the display of the graphical user interface on the touch screen.


Some embodiments of the present disclosure provide methods and apparatuses for rendering a hair virtual model, computer devices, and storage media, which will be described in detail below. It should be noted that the order in which the following embodiments are described is not intended to limit a preferred order of the embodiments.


Referring to FIG. 1, FIG. 1 is a schematic flowchart of a method for rendering a hair virtual model according to one or more embodiments of the present disclosure. A specific flow may be operations 101 to 104 as follows.


In operation 101, particle attribute information of a hair particle in a hair particle model is obtained. The hair particle model simulates a target hair rendering effect.


In the embodiments of the present disclosure, the particle attribute information includes a location of the hair particle in a model space and a tangential attribute of the hair particle.
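Purely as an illustration (the class and field names are hypothetical, not part of the disclosure), the particle attribute information described above could be held in a record with a model-space location and a tangential attribute:

```python
from dataclasses import dataclass

@dataclass
class HairParticle:
    # Location of the hair particle in model space (x, y, z).
    position: tuple
    # Tangential attribute: unit direction of the hair strand at this particle.
    tangent: tuple

# Example: a particle 0.3 units along the z-axis, strand running along x.
p = HairParticle(position=(0.0, 0.0, 0.3), tangent=(1.0, 0.0, 0.0))
print(p)
```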


In operation 102, a hair particle order parameter corresponding to the hair particle model is obtained. The hair particle order parameter indicates an order of respective distances of hair particles from a specified reference point in a model space.


In an embodiment, after the operation of obtaining the hair particle order parameter corresponding to the hair particle model, the method further includes:

    • obtaining a number of hair layer particles corresponding to the hair particle model, wherein the number of hair layer particles indicates a number of hair particles in each hair layer of the hair particle model, and the hair layer is obtained by layering the model space of the hair particle model based on the specified reference point; and
    • determining the target hair particle currently involved in rendering based on the number of hair layer particles and the hair particle order.


In another embodiment, before the operation of obtaining the hair particle order parameter corresponding to the hair particle model, the method further includes:

    • layering the model space based on a relative distance between the hair particle model and the specified reference point and a specified coordinate axis of the model space, to obtain a plurality of hair layers and a total number of the plurality of hair layers;
    • determining a number of hair layers corresponding to hair layers where all hair particles are located in the hair particle model based on the relative distance and the total number of the plurality of hair layers; and
    • determining the number of the hair layer particles and the hair particle order parameter based on the number of hair layers corresponding to the hair layers where all the hair particles are located in the hair particle model.


Specifically, the operation of layering the model space based on the relative distance between the hair particle model and the specified reference point and the specified coordinate axis of the model space, to obtain the plurality of hair layers and the total number of the plurality of hair layers, may include:

    • determining a first distance and a second distance between the hair particle model and the specified reference point in the model space, wherein the first distance indicates a minimum distance between the hair particle model and the specified reference point and the second distance indicates a maximum distance between the hair particle model and the specified reference point; and
    • layering the model space based on the first distance, the second distance, and the specified coordinate axis of the model space to obtain the hair layers and the total number of the plurality of hair layers.


Further, after the operation of layering the model space based on the first distance, the second distance, and the specified coordinate axis of the model space to obtain the plurality of hair layers and the total number of the plurality of hair layers, the method may further include:

    • determining a number of hair layers corresponding to a hair layer where the hair particle is located, based on the first distance, the second distance, and a target distance corresponding to the hair particle, wherein the target distance indicates a distance between the hair particle and the specified reference point;
    • generating a hair layer identifier based on the number of the hair layers, and associating the hair layer identifier with the hair particle; and
    • performing a prefix sum processing based on hair layer identifiers corresponding to all hair particles in the hair particle model to obtain the number of the hair layer particles.
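The prefix sum processing mentioned above matches the standard counting-sort building block. A minimal sketch, assuming the per-layer particle counts have already been accumulated (this sketch is illustrative, not the claimed implementation):

```python
def prefix_sum(layer_counts):
    """Exclusive prefix sum: offsets[i] = number of particles in layers
    0..i-1, i.e. the starting index of layer i in the sorted particle order."""
    offsets = []
    total = 0
    for count in layer_counts:
        offsets.append(total)
        total += count
    return offsets

# 4 hair layers holding 3, 0, 5, and 2 particles respectively.
print(prefix_sum([3, 0, 5, 2]))  # [0, 3, 3, 8]
```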


In another embodiment, after the operation of layering the model space based on the first distance, the second distance, and the specified coordinate axis of the model space to obtain the plurality of hair layers and the total number of the plurality of hair layers, the method may further include:

    • sorting all the hair particles in the hair particle model from near to far according to the respective distances of the hair particles from the specified reference point based on the hair layer identifiers respectively corresponding to the hair particles, to generate the hair particle order parameter.
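One plausible realization of this near-to-far sort, assuming each particle already carries a hair layer identifier (a smaller identifier meaning a closer layer), is a counting sort over the layer identifiers:

```python
def sort_by_layer(layer_ids):
    """Order particle indices from near to far using their hair layer
    identifiers, counting-sort style: count per layer, take an exclusive
    prefix sum for each layer's start slot, then scatter particle indices."""
    layer_count = max(layer_ids) + 1
    counts = [0] * layer_count
    for lid in layer_ids:
        counts[lid] += 1
    offsets, total = [], 0
    for c in counts:
        offsets.append(total)
        total += c
    order = [0] * len(layer_ids)
    for particle, lid in enumerate(layer_ids):
        order[offsets[lid]] = particle
        offsets[lid] += 1
    return order

# Particles 0..4 lie in layers 2, 0, 1, 0, 2.
print(sort_by_layer([2, 0, 1, 0, 2]))  # [1, 3, 2, 0, 4]
```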


Optionally, in one or more embodiments of the present disclosure, all the hair particles in the hair particle model may be classified based on the number of hair layer particles and the hair particle order parameter to obtain a plurality of hair particle sets. Ones of the hair particles in each of the hair particle sets belong to a same hair layer of the hair layers and are arranged in order. A hair particle sorting result may be generated based on the plurality of hair particle sets.


The target hair particle currently involved in the rendering may be determined according to the hair particle sorting result.


In operation 103, a target hair particle of the hair particle currently involved in rendering is determined based on the hair particle order.


In operation 104, a rendered hair virtual model is obtained by determining a hair segment rendering region based on the particle attribute information of the target hair particle and rendering the hair segment rendering region.


In order to make target hair more realistic, the operation of obtaining the rendered hair virtual model by determining the hair segment rendering region based on the particle attribute information of the target hair particle and rendering the hair segment rendering region includes:

    • obtaining an initial hair width corresponding to the target hair particle based on the particle attribute information of the target hair particle;
    • determining a width adjustment value for the initial hair width based on a distance between the target hair particle and the specified reference point;
    • adjusting the initial hair width based on the width adjustment value to obtain an adjusted hair width;
    • determining a plurality of region rendering vertices corresponding to the hair segment rendering region based on the adjusted hair width;
    • rendering a plurality of polygonal grids based on the plurality of region rendering vertices; and
    • generating the rendered hair virtual model based on the plurality of polygonal grids.
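As a simplified sketch of determining region rendering vertices from an adjusted hair width (the geometry is reduced to two dimensions and all names are illustrative, not the claimed implementation): a segment between two particle positions is expanded into a quad by offsetting each endpoint perpendicular to the segment's tangent by half the hair width.

```python
import math

def segment_quad(p0, p1, width):
    """Expand a 2D hair segment (p0 -> p1) into four region rendering
    vertices by offsetting each endpoint by half the hair width along
    the unit normal (perpendicular to the segment's tangent)."""
    tx, ty = p1[0] - p0[0], p1[1] - p0[1]
    length = math.hypot(tx, ty)
    nx, ny = -ty / length, tx / length  # unit normal to the tangent
    h = width / 2.0
    return [
        (p0[0] + nx * h, p0[1] + ny * h),
        (p0[0] - nx * h, p0[1] - ny * h),
        (p1[0] - nx * h, p1[1] - ny * h),
        (p1[0] + nx * h, p1[1] + ny * h),
    ]

# Horizontal segment, adjusted width 0.2: corners offset by 0.1 in y.
print(segment_quad((0.0, 0.0), (1.0, 0.0), 0.2))
```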


Further, after the operation of generating the hair virtual model based on the plurality of polygonal grids, the method may include:

    • determining a hair rendering pixel located in the hair segment rendering region;
    • determining, based on a tangent line of the hair particle, a relative distance between the hair rendering pixel and the tangent line as a pixel distance; and
    • adjusting current transparency of the hair rendering pixel based on the pixel distance.


In order to improve an anti-aliasing effect of the target hair, the operation of adjusting the current transparency of the hair rendering pixel based on the pixel distance may include:

    • performing a smoothness calculation based on a pixel width of the hair rendering pixel, the pixel distance, a preset conversion ratio, and a specified constant to obtain a transparency adjustment value for the hair rendering pixel; and
    • adjusting the current transparency of the hair rendering pixel based on the transparency adjustment value to obtain an adjusted hair rendering pixel.
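The smoothness calculation reads like a smoothstep-style falloff. The sketch below assumes the transparency fades from opaque on the strand's tangent line to fully transparent at half the converted width; `conversion_ratio` and `constant` stand in for the preset conversion ratio and specified constant, whose exact roles are not given in the text:

```python
def edge_alpha(pixel_distance, pixel_width, conversion_ratio=1.0, constant=0.5):
    """Hedged sketch of the transparency adjustment: a smoothstep falloff
    that fades a hair rendering pixel out as its distance from the tangent
    line approaches the edge of the (converted) hair width."""
    edge = pixel_width * conversion_ratio * constant  # assumed fade boundary
    if edge <= 0:
        return 0.0
    t = 1.0 - pixel_distance / edge
    t = min(max(t, 0.0), 1.0)
    return t * t * (3.0 - 2.0 * t)  # smoothstep between 0 and 1

print(edge_alpha(0.0, 1.0))  # 1.0: pixel on the tangent line stays opaque
print(edge_alpha(0.5, 1.0))  # 0.0: pixel at the edge is fully transparent
```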


In summary, the embodiments of the present disclosure provide a method for rendering a hair virtual model. On one hand, the method includes: obtaining particle attribute information of a hair particle in a hair particle model, wherein the hair particle model simulates a target hair rendering effect; obtaining a hair particle order parameter corresponding to the hair particle model; determining, based on the hair particle order, a target hair particle of the hair particle currently involved in rendering; and obtaining a rendered hair virtual model by determining a hair segment rendering region based on the particle attribute information of the target hair particle and rendering the hair segment rendering region. By generating the hair particle model through simulation, splitting hair into hair particles, and rendering the hair particles, a semi-transparent effect of the target hair is achieved accurately. Moreover, since there is no need to create a linked list for each pixel, computing resource consumption of the hair rendering is reduced, thereby reducing the video memory occupancy of a computer device and improving hair rendering efficiency. On the other hand, the embodiments of the present disclosure may further expand each hair particle along a direction perpendicular to a tangent line of hair. By adjusting a pixel distance at a hair edge to adjust current transparency of a hair rendering pixel, it is possible to improve the colour relaxation and the anti-aliasing effect of the hair during hair rendering, thereby improving the authenticity of the target hair.


Specifically, some embodiments of the present disclosure further provide a specific application method for rendering a hair virtual model. The specific method may be as follows.


(1) A minimum distance Min and a maximum distance Max of the hair particles in the hair particle model from a virtual camera (i.e., the specified reference point) can be estimated at a Central Processing Unit (CPU) end. Then, a model space between the minimum distance Min and the maximum distance Max is divided into a specified number of divisions (LayerCount) of hair layers (Layer). Thereafter, the hair particles are placed in different Layers according to a distance between each of the hair particles and the specified reference point to achieve approximate ordering. In some embodiments of the present disclosure, the specified number of divisions may be 1024, 2048, etc., and may be adjusted according to actual conditions. For example, the minimum distance Min and the maximum distance Max of the hair particle model may be calculated to be 20 cm and 40 cm, respectively. In some embodiments of the present disclosure, the model space between 20 cm and 40 cm may be divided into 1024 hair layers, (40−20)/1024 cm is a thickness of each hair layer, and each hair layer represents a distance interval.
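The division of the model space described in step (1) can be illustrated with the example values above (Min = 20 cm, Max = 40 cm, 1024 layers). The following is a minimal sketch only; the names `layer_thickness` and `layer_bounds` are illustrative and not part of the disclosure:

```python
# Illustrative sketch of dividing the model space into hair layers.
# Values taken from the example in the text: Min = 20 cm, Max = 40 cm.
MIN_DIST = 20.0
MAX_DIST = 40.0
LAYER_COUNT = 1024  # specified number of divisions

# Thickness of each hair layer, as stated in the text: (40 - 20) / 1024 cm.
layer_thickness = (MAX_DIST - MIN_DIST) / LAYER_COUNT

def layer_bounds(layer_id):
    """Distance interval [near, far) covered by a given hair layer."""
    near = MIN_DIST + layer_id * layer_thickness
    return near, near + layer_thickness

print(layer_thickness)  # ~0.0195 cm per layer
print(layer_bounds(0))  # first layer starts at the minimum distance, 20 cm
```

Each hair layer thus represents one distance interval, and placing a particle into the layer containing its camera distance yields the approximate ordering described above.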


(2) An empty Buffer having a length of LayerCount may be constructed at a Graphics Processing Unit (GPU) end and named CounterBuffer. The CounterBuffer is used to store a number of hair particles in each hair layer. Further, it is also possible to construct, at the GPU end, a Buffer having a length of a total number (ParticleCount) of hair particles in the hair particle model, and name this Buffer RenderKeyBuffer. The RenderKeyBuffer is used to record hair particle identifiers ordered by distance from the specified reference point, from near to far.


(3) A position Xi of a hair particle i in the model space can be read in parallel at the GPU end, a distance between the hair particle and the virtual camera is calculated, the number of hair layers (LayerId) corresponding to the hair particle is then calculated according to the hair layer identifier calculation formula below, and CounterBuffer[LayerId] is atomically incremented by one.








Number of hair layers = ((dis - min)/(max - min)) × Specified number of divisions,




where dis is a distance between the hair particle and the virtual camera, min is the minimum distance of the hair particles from the virtual camera (i.e., the specified reference point), and max is the maximum distance of the hair particles from the virtual camera (i.e., the specified reference point).
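A sequential sketch of the formula and the per-layer counting of step (3) follows. All names are illustrative (the disclosure does not prescribe them), `counter_buffer` stands in for the GPU-side CounterBuffer, and the atomic increment is modeled as a plain increment because this single-threaded sketch has no contention:

```python
MIN_DIST, MAX_DIST = 20.0, 40.0  # example min/max camera distances from the text
LAYER_COUNT = 1024               # specified number of divisions

def layer_id(dis):
    """Hair layer identifier formula from the text:
    (dis - min) / (max - min) * specified number of divisions."""
    lid = int((dis - MIN_DIST) / (MAX_DIST - MIN_DIST) * LAYER_COUNT)
    return min(lid, LAYER_COUNT - 1)  # clamp dis == max into the last layer

# CounterBuffer: number of hair particles falling into each hair layer.
counter_buffer = [0] * LAYER_COUNT

# Hypothetical particle distances; on the GPU each particle is processed
# in parallel and the increment below is an atomic operation instead.
particle_distances = [21.0, 21.0, 35.5, 40.0]
for dis in particle_distances:
    counter_buffer[layer_id(dis)] += 1

print(layer_id(21.0), counter_buffer[layer_id(21.0)])  # layer 51 holds 2 particles
```

The clamp at the last layer is an assumption added for the boundary case dis == max, which the formula as written would map one past the final layer.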


(4) After the number of hair layers corresponding to each hair particle in the hair particle model is obtained, parallel prefix sum calculation is performed on the CounterBuffer at the GPU end to obtain a total number of hair particles in each hair layer.
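One common reading of the prefix sum in step (4) is an exclusive prefix sum over the per-layer counts, giving for each hair layer the index at which its first particle will be written, which step (5) then uses as the WritingPoint. A sequential sketch (the GPU performs the same computation in parallel; `exclusive_prefix_sum` is an illustrative name):

```python
def exclusive_prefix_sum(counts):
    """Exclusive prefix sum: out[i] = counts[0] + ... + counts[i-1]."""
    out, running = [], 0
    for c in counts:
        out.append(running)
        running += c
    return out

# Hypothetical per-layer particle counts (a tiny 4-layer CounterBuffer).
counter_buffer = [2, 0, 1, 1]
offsets = exclusive_prefix_sum(counter_buffer)
print(offsets)  # [0, 2, 2, 3] -> starting write position for each layer
```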


(5) After the number of hair layers corresponding to each hair particle in the hair particle model is obtained, CounterBuffer[LayerId] is recorded as WritingPoint, and CounterBuffer[LayerId] is atomically incremented by one. RenderKeyBuffer[WritingPoint] is then set to i, so that the Ids of the hair particles (HairParticle) are stored in the RenderKeyBuffer from near to far.
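Steps (3) through (5) together amount to a counting sort of the particles by hair layer. A single-threaded sketch of the scatter that fills RenderKeyBuffer with particle Ids ordered from near to far (names are illustrative; on the GPU the WritingPoint record-and-increment is a single atomic operation):

```python
def exclusive_prefix_sum(counts):
    """Exclusive prefix sum: out[i] = counts[0] + ... + counts[i-1]."""
    out, running = [], 0
    for c in counts:
        out.append(running)
        running += c
    return out

# Hypothetical layer id per particle (particle i lies in layer layer_ids[i]).
layer_ids = [2, 0, 3, 0]
layer_count = 4

# Steps (3)-(4): count particles per layer, then prefix-sum to get offsets.
counts = [0] * layer_count
for lid in layer_ids:
    counts[lid] += 1
write_point = exclusive_prefix_sum(counts)

# Step (5): for each particle, record WritingPoint, advance it, and store
# the particle id, so Ids end up grouped by layer from near to far.
render_key_buffer = [None] * len(layer_ids)
for i, lid in enumerate(layer_ids):
    render_key_buffer[write_point[lid]] = i
    write_point[lid] += 1

print(render_key_buffer)  # [1, 3, 0, 2]: layer-0 particles first, then layers 2, 3
```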


(6) Referring to FIG. 2, in a VertexShader, a target hair particle in the RenderKeyBuffer, which actually needs to be rendered, is read based on a rendered hair identifier, and a position of each rendering vertex of the target hair particle is obtained based on a position of the target hair particle in the model space and a tangential attribute of the target hair particle. Then, in the VertexShader, a PixelWidth corresponding to the position of each rendering vertex of the target hair particle is determined based on a distance of the target hair particle from the virtual camera. A target width of the target hair particle is obtained by adding a hair width and the PixelWidth.


(7) In a PixelShader, the PixelWidth and a PixelDistance are determined, respectively. A coverage degree of the hair over a current pixel is calculated according to a smoothness calculation formula to obtain a transparency value (Alpha) of the current pixel. Finally, Alpha blending is performed on the pixels through GPU hardware.





Smoothstep(0,1,PixelWidth−LengthToPixel×PixelDistance+0.5),


where LengthToPixel is a conversion ratio of a world space length to a pixel space length at a current depth, and 0.5 is an empirical value that can be replaced according to actual situations.


Specifically, for y = smoothstep(0, 1, x), the above [PixelWidth−LengthToPixel×PixelDistance+0.5] is substituted for x. If x is less than 0, the smoothness value is 0; if x is greater than or equal to 0 and less than or equal to 1, the smoothness value is 3x²−2x³; and if x is greater than 1, the smoothness value is 1, so that the smoothness value smoothly transitions from 0 to 1, thereby realizing a smooth transition.
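The piecewise smoothstep described above, and the coverage formula of step (7), can be written out directly. This is an illustrative sketch (GPUs provide smoothstep as a shader built-in; `coverage_alpha` is a hypothetical name for the combined formula):

```python
def smoothstep(edge0, edge1, x):
    """Smoothstep as described in the text: normalize x to [0, 1],
    clamp, then apply 3t^2 - 2t^3 for a smooth 0-to-1 transition."""
    t = (x - edge0) / (edge1 - edge0)
    t = max(0.0, min(1.0, t))
    return 3.0 * t * t - 2.0 * t * t * t

def coverage_alpha(pixel_width, length_to_pixel, pixel_distance):
    """Coverage/transparency of the current pixel, per the formula:
    Smoothstep(0, 1, PixelWidth - LengthToPixel * PixelDistance + 0.5),
    where 0.5 is the empirical constant mentioned in the text."""
    return smoothstep(0.0, 1.0,
                      pixel_width - length_to_pixel * pixel_distance + 0.5)

print(smoothstep(0.0, 1.0, 0.5))    # 0.5: midpoint of the smooth transition
print(coverage_alpha(1.0, 1.0, 2.0))  # 0.0: pixel lies far from the hair tangent
```

A pixel on the hair's centerline (small PixelDistance) gets an Alpha near 1, while a pixel near the expanded edge fades smoothly toward 0, which produces the anti-aliased edge discussed above.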


To facilitate better implementation of the method for rendering the hair virtual model of the embodiments of the present disclosure, some embodiments of the present disclosure further provide an apparatus for rendering a hair virtual model, wherein the terms used have the same meanings as in the above-mentioned method for rendering the hair virtual model. For specific implementation details, reference may be made to the description in the method embodiments.


Referring to FIG. 3, FIG. 3 is a schematic structural diagram of an apparatus for rendering a hair virtual model according to one or more embodiments of the present disclosure. The apparatus includes:

    • a first obtaining unit 201 configured to obtain particle attribute information of a hair particle in a hair particle model, wherein the hair particle model simulates a target hair rendering effect;
    • a second obtaining unit 202 configured to obtain a hair particle order parameter corresponding to the hair particle model, wherein the hair particle order parameter indicates an order of a respective distance of each hair particle from a specified reference point in a model space;
    • a first determination unit 203 configured to, based on the hair particle order, determine a target hair particle of the hair particle currently involved in rendering; and
    • a second determination unit 204 configured to obtain a rendered hair virtual model by determining a hair segment rendering region based on the particle attribute information of the target hair particle and rendering the hair segment rendering region.


In some embodiments, the apparatus further includes:

    • a first obtaining subunit configured to obtain a number of hair layer particles corresponding to the hair particle model, wherein the number of hair layer particles indicates a number of hair particles in each hair layer of the hair particle model, and the hair layer is obtained by layering the model space of the hair particle model based on the specified reference point; and
    • a first determination subunit configured to determine the target hair particle currently involved in rendering based on the number of hair layer particles and the hair particle order.


In some embodiments, the apparatus further includes:

    • a first processing unit configured to layer the model space based on a relative distance between the hair particle model and the specified reference point and a specified coordinate axis of the model space, to obtain a plurality of hair layers and a total number of the plurality of hair layers;
    • a second determination subunit configured to determine a number of hair layers corresponding to hair layers where all hair particles are located in the hair particle model based on the relative distance and the total number of the plurality of hair layers; and
    • the second determination subunit configured to determine the number of the hair layer particles and the hair particle order parameter based on the number of hair layers corresponding to the hair layers where the all hair particles are located in the hair particle model.


In some embodiments, the apparatus further includes:

    • a third determination subunit configured to determine a first distance and a second distance between the hair particle model and the specified reference point in the model space, wherein the first distance indicates a minimum distance between the hair particle model and the specified reference point and the second distance indicates a maximum distance between the hair particle model and the specified reference point; and
    • a second processing unit configured to layer the model space based on the first distance, the second distance, and the specified coordinate axis of the model space to obtain the plurality of hair layers and the total number of the plurality of hair layers.


In some embodiments, the apparatus further includes:

    • a fourth determination subunit configured to determine a number of hair layers corresponding to a hair layer where the hair particles are located based on the first distance, the second distance, and a target distance corresponding to the hair particle indicating a distance between the hair particle and the specified reference point;
    • an association unit configured to generate a hair layer identifier based on the number of the hair layers, and associate the hair layer identifier with the hair particle; and
    • a third processing unit configured to perform a prefix sum processing based on hair layer identifiers corresponding to all hair particles in the hair particle model to obtain the number of the hair layer particles.


In some embodiments, the apparatus further includes:

    • a first generation unit configured to sort all the hair particles in the hair particle model from near to far according to the respective distances of the hair particles from the specified reference point based on the hair layer identifiers respectively corresponding to the hair particles, to generate the hair particle order parameter.


In some embodiments, the apparatus further includes:

    • a fourth processing unit configured to classify all the hair particles in the hair particle model based on the number of hair layer particles and the hair particle order parameter to obtain a plurality of hair particle sets, wherein ones of the hair particles in each of the hair particle sets belong to a same hair layer of the hair layers and are arranged in order;
    • a second generation unit configured to generate a hair particle sorting result based on the plurality of hair particle sets; and
    • a fifth determination subunit configured to determine the target hair particle currently involved in the rendering according to the hair particle sorting result.


In some embodiments, the apparatus further includes:

    • a second obtaining subunit configured to obtain an initial hair width corresponding to the target hair particle based on the particle attribute information of the target hair particle;
    • a sixth determination subunit configured to determine a width adjustment value for the initial hair width based on a distance between the target hair particle and the specified reference point;
    • a first adjustment unit configured to adjust the initial hair width based on the width adjustment value to obtain an adjusted hair width;
    • a sixth determination subunit configured to determine a plurality of region rendering vertices corresponding to the hair segment rendering region based on the adjusted hair width;
    • a rendering unit configured to render a plurality of polygonal grids based on the plurality of region rendering vertices; and
    • a third generation unit configured to generate the hair virtual model based on the plurality of polygonal grids.


In some embodiments, the apparatus further includes:

    • a seventh determination subunit configured to determine a hair rendering pixel located in the hair segment rendering region;
    • the seventh determination subunit configured to determine, based on a tangent line of the hair particle, a relative distance between the hair rendering pixel and the tangent line as a pixel distance; and
    • a second adjustment unit configured to adjust current transparency of the hair rendering pixel based on the pixel distance.


In some embodiments, the apparatus further includes:

    • a calculation unit configured to perform a smoothness calculation based on a pixel width of the hair rendering pixel, the pixel distance, a preset conversion ratio, and a specified constant to obtain a transparency adjustment value for the hair rendering pixel; and
    • a third adjustment unit configured to adjust the current transparency of the hair rendering pixel based on the transparency adjustment value to obtain an adjusted hair rendering pixel.


Some embodiments of the present disclosure provide an apparatus for rendering a hair virtual model, wherein: a first obtaining unit 201 obtains particle attribute information of a hair particle in a hair particle model, wherein the hair particle model simulates a target hair rendering effect; a second obtaining unit 202 obtains a hair particle order parameter corresponding to the hair particle model, wherein the hair particle order parameter indicates an order of a respective distance of each hair particle from a specified reference point in a model space; a first determination unit 203, based on the hair particle order, determines a target hair particle of the hair particle currently involved in rendering; and a second determination unit 204 obtains a rendered hair virtual model by determining a hair segment rendering region based on the particle attribute information of the target hair particle and rendering the hair segment rendering region. On one hand, the embodiments of the present disclosure generate the hair particle model through simulation, split hair into hair particles, and render the hair particles, which achieves a semi-transparent effect of the target hair, reduces video memory occupancy of a computer device, and improves hair rendering efficiency. On the other hand, the embodiments of the present disclosure may further expand each hair particle along a direction perpendicular to a tangent line of hair. By adjusting current transparency of a hair rendering pixel based on a pixel distance at a hair edge, it is possible to improve colour relaxation and an anti-aliasing effect of the hair during hair rendering, thereby improving authenticity of the target hair.


Accordingly, some embodiments of the present disclosure further provide a computer device. The computer device may be a terminal or a server. The terminal may be a smartphone, a tablet computer, a notebook computer, a touch screen, a game machine, a Personal Computer (PC), a Personal Digital Assistant (PDA), or the like. As shown in FIG. 4, FIG. 4 is a schematic structural diagram of a computer device according to one or more embodiments of the present disclosure. The computer device 300 includes a processor 301 having one or more processing cores, a memory 302 having one or more computer-readable storage media, and a computer program stored on the memory 302 and operable on the processor. The processor 301 is electrically connected to the memory 302. It will be appreciated by those skilled in the art that the computer device structure shown in the drawings does not constitute a limitation on the computer device, and the computer device may include more or fewer components than illustrated, combine some components, or have different component arrangements.


The processor 301 is a control centre of the computer device 300, connects various parts of the computer device 300 by various interfaces and lines, and performs various functions of the computer device 300 and processes data by running or loading software programs and/or modules stored in the memory 302 and invoking data stored in the memory 302, thereby monitoring the computer device 300 as a whole.


In some embodiments of the present disclosure, the processor 301 in the computer device 300 loads instructions corresponding to processes of one or more application programs into the memory 302 according to the following operations, and runs the application programs stored in the memory 302 to implement various functions:

    • obtaining particle attribute information of a hair particle in a hair particle model, wherein the hair particle model simulates a target hair rendering effect;
    • obtaining a hair particle order parameter corresponding to the hair particle model, wherein the hair particle order parameter indicates an order of a respective distance of each hair particle from a specified reference point in a model space;
    • based on the hair particle order, determining a target hair particle of the hair particle currently involved in rendering; and
    • obtaining a rendered hair virtual model by determining a hair segment rendering region based on the particle attribute information of the target hair particle and rendering the hair segment rendering region.


Optionally, after obtaining the hair particle order parameter corresponding to the hair particle model, the method further includes:

    • obtaining a number of hair layer particles corresponding to the hair particle model, wherein the number of hair layer particles indicates a number of hair particles in each hair layer of the hair particle model, and the hair layer is obtained by layering the model space of the hair particle model based on the specified reference point; and
    • determining the target hair particle currently involved in rendering based on the number of hair layer particles and the hair particle order.


Optionally, before obtaining the hair particle order parameter corresponding to the hair particle model, the method further includes:

    • layering the model space based on a relative distance between the hair particle model and the specified reference point and a specified coordinate axis of the model space, to obtain a plurality of hair layers and a total number of the plurality of hair layers;
    • determining a number of hair layers corresponding to hair layers where all hair particles are located in the hair particle model based on the relative distance and the total number of the plurality of hair layers; and
    • determining the number of the hair layer particles and the hair particle order parameter based on the number of hair layers corresponding to the hair layers where the all hair particles are located in the hair particle model.


Optionally, the layering the model space based on the relative distance between the hair particle model and the specified reference point and the specified coordinate axis of the model space, to obtain the plurality of hair layers and the total number of the plurality of hair layers includes:

    • determining a first distance and a second distance between the hair particle model and the specified reference point in the model space, wherein the first distance indicates a minimum distance between the hair particle model and the specified reference point and the second distance indicates a maximum distance between the hair particle model and the specified reference point; and
    • layering the model space based on the first distance, the second distance, and the specified coordinate axis of the model space to obtain the plurality of hair layers and the total number of the plurality of hair layers.


Optionally, after layering the model space based on the first distance, the second distance, and the specified coordinate axis of the model space to obtain the plurality of hair layers and the total number of the plurality of hair layers, the method includes:

    • determining a number of hair layers corresponding to a hair layer where the hair particles are located based on the first distance, the second distance, and a target distance corresponding to the hair particle indicating a distance between the hair particle and the specified reference point;
    • generating a hair layer identifier based on the number of the hair layers, and associating the hair layer identifier with the hair particle; and
    • performing a prefix sum processing based on hair layer identifiers corresponding to all hair particles in the hair particle model to obtain the number of the hair layer particles.


Optionally, after layering the model space based on the first distance, the second distance, and the specified coordinate axis of the model space to obtain the plurality of hair layers and the total number of the plurality of hair layers, the method further includes:

    • sorting all the hair particles in the hair particle model from near to far according to the respective distances of the hair particles from the specified reference point based on the hair layer identifiers respectively corresponding to the hair particles, to generate the hair particle order parameter.


Optionally, the method further includes:

    • classifying all the hair particles in the hair particle model based on the number of hair layer particles and the hair particle order parameter to obtain a plurality of hair particle sets, wherein ones of the hair particles in each of the hair particle sets belong to a same hair layer of the hair layers and are arranged in order;
    • generating a hair particle sorting result based on the plurality of hair particle sets; and
    • determining the target hair particle currently involved in the rendering according to the hair particle sorting result.


Optionally, obtaining the rendered hair virtual model by determining the hair segment rendering region based on the particle attribute information of the target hair particle and rendering the hair segment rendering region includes:

    • obtaining an initial hair width corresponding to the target hair particle based on the particle attribute information of the target hair particle;
    • determining a width adjustment value for the initial hair width based on a distance between the target hair particle and the specified reference point;
    • adjusting the initial hair width based on the width adjustment value to obtain an adjusted hair width;
    • determining a plurality of region rendering vertices corresponding to the hair segment rendering region based on the adjusted hair width;
    • rendering a plurality of polygonal grids based on the plurality of region rendering vertices; and
    • generating the hair virtual model based on the plurality of polygonal grids.


Optionally, after generating the hair virtual model based on the plurality of polygonal grids, the method further includes:

    • determining a hair rendering pixel located in the hair segment rendering region;
    • determining, based on a tangent line of the hair particle, a relative distance between the hair rendering pixel and the tangent line as a pixel distance; and
    • adjusting current transparency of the hair rendering pixel based on the pixel distance.


Optionally, the adjusting of the current transparency of the hair rendering pixel based on the pixel distance includes:

    • performing a smoothness calculation based on a pixel width of the hair rendering pixel, the pixel distance, a preset conversion ratio, and a specified constant to obtain a transparency adjustment value for the hair rendering pixel; and
    • adjusting the current transparency of the hair rendering pixel based on the transparency adjustment value to obtain an adjusted hair rendering pixel.


Some embodiments of the present disclosure provide methods and apparatuses for rendering a hair virtual model, computer devices, and storage media, which: obtain particle attribute information of a hair particle in a hair particle model, wherein the hair particle model simulates a target hair rendering effect; obtain a hair particle order parameter corresponding to the hair particle model, wherein the hair particle order parameter indicates an order of a respective distance of each hair particle from a specified reference point in a model space; based on the hair particle order, determine a target hair particle of the hair particle currently involved in rendering; and finally obtain a rendered hair virtual model by determining a hair segment rendering region based on the particle attribute information of the target hair particle and rendering the hair segment rendering region. The embodiments of the present disclosure generate a hair particle model through simulation, split hair into hair particles, and render the hair particles, which reduces video memory occupancy of a computer device and improves hair rendering efficiency. Moreover, by adjusting a pixel distance at a hair edge, it is possible to improve colour relaxation and an anti-aliasing effect of the hair during hair rendering, thereby improving authenticity of target hair.


Detailed implementation of the above operations may be referred to aforementioned embodiments, and will not be repeated herein.


Optionally, as shown in FIG. 4, the computer device 300 further includes a touch screen 303, a radio frequency circuit 304, an audio circuit 305, an input unit 306, and a power supply 307. The processor 301 is electrically connected to the touch screen 303, the radio frequency circuit 304, the audio circuit 305, the input unit 306, and the power supply 307, respectively. It will be appreciated by those skilled in the art that the computer device structure shown in FIG. 4 does not constitute a limitation on the computer device, and the computer device may include more or fewer components than illustrated, combine some components, or have different component arrangements.


The touch screen 303 may be configured to display a graphical player interface and to receive operational instructions generated by a player acting on the graphical user interface. The touch display screen 303 may include a display panel and a touch panel. The display panel may be used to display information input by or provided to the player and various graphical player interfaces of the computer device, which may be composed of graphics, text, icons, videos, and any combination thereof. Alternatively, the display panel may be configured in a form of a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED), or the like. The touch panel may be used to collect a touch operation (e.g., an operation of the player on or near the touch panel using any suitable object or accessory such as a finger, a stylus, etc.) of the player on or near the touch panel, and generate a corresponding operation instruction, and the operation instruction executes a corresponding program. Alternatively, the touch panel may include a touch detection device and a touch controller. The touch detection device detects a touch orientation of the player, detects a signal brought about by the touch operation, and transmits the signal to the touch controller. The touch controller receives touch information from the touch detection device and converts the touch information into contact coordinates, sends the contact coordinates to the processor 301, and can receive and execute commands sent from the processor 301. The touch panel may cover the display panel, and when the touch panel detects the touch operation on or near the touch panel, the touch panel transmits the touch operation to the processor 301 to determine a type of a touch event. Then, the processor 301 provides a corresponding visual output on the display panel according to the type of the touch event. 
In the embodiments of the present disclosure, the touch panel and the display panel may be integrated into the touch screen 303 to implement input and output functions. However, in some embodiments, the touch panel and the display panel may be implemented as two separate components to implement the input and output functions. That is, the touch screen 303 may implement the input function as part of the input unit 306.


In the embodiments of the present disclosure, a game application is executed by the processor 301 to generate a graphical player interface on the touch display screen 303. The touch screen 303 may be configured to display the graphical player interface and to receive operation instructions generated by the player acting on the graphical user interface.


The radio frequency circuit 304 may be configured to transmit and receive radio frequency signals to establish wireless communication with a network device or other computer devices through wireless communication, and to transmit and receive signals between the network device or other computer devices.


The audio circuit 305 may be used to provide an audio interface between the player and the computer device through a speaker and a microphone. The audio circuit 305 may transmit an electrical signal converted from received audio data to a loudspeaker, and the loudspeaker converts the electrical signal into a sound signal for output. On the other hand, the microphone converts a collected sound signal into an electrical signal, the audio circuit 305 receives and converts the electrical signal into audio data. The audio data is outputted to the processor 301 for processing, and the processed audio data is sent to, for example, another computer device through the radio frequency circuit 304, or the audio data is outputted to the memory 302 for further processing. The audio circuit 305 may also include an earplug jack to provide communication between a peripheral headset and the computer device.


The input unit 306 may be configured to receive input numbers, character information, or player characteristic information (e.g., fingerprints, iris, face information), and to generate keyboard, mouse, joystick, optical, or trackball signal input related to player settings and functional control.


The power supply 307 is configured to power various components of the computer device 300. Alternatively, the power supply 307 may be logically connected to the processor 301 through a power supply management system, so that functions such as charging, discharging, and power consumption management are managed through the power supply management system. The power supply 307 may further include one or more DC or AC power supplies, a recharging system, a power failure detection circuit, a power converter or inverter, a power status indicator, or any other component.


Although not shown in FIG. 4, the computer device 300 may also include a camera, a sensor, a wireless fidelity module, a Bluetooth module, and the like, and details are not described herein.


In the above-mentioned embodiments, the description of each embodiment has its own emphasis, and parts not described in detail in a certain embodiment may be referred to related description of other embodiments.


As can be seen from the above, on one hand, the computer device provided in the embodiments generates the hair particle model through simulation, splits hair into hair particles, and renders the hair particles, which achieves a semi-transparent effect of the target hair, reduces video memory occupancy of the computer device, and improves hair rendering efficiency. On the other hand, the embodiments of the present disclosure may further expand each hair particle along a direction perpendicular to a tangent line of hair. By adjusting current transparency of a hair rendering pixel based on a pixel distance at a hair edge, it is possible to improve colour relaxation and an anti-aliasing effect of the hair during hair rendering, thereby improving authenticity of the target hair.


It will be appreciated by those of ordinary skill in the art that all or a portion of the operations of the various methods of the above-described embodiments may be performed by instructions, which may be stored in a computer-readable storage medium and loaded and executed by a processor, or may be performed by the instructions through controlling relevant hardware.


To this end, embodiments of the present disclosure provide a computer-readable storage medium having stored therein a plurality of computer programs that can be loaded by a processor to perform operations in the method for rendering the hair virtual model provided in embodiments of the present disclosure. For example, the computer programs may perform the following operations:

    • obtaining particle attribute information of a hair particle in a hair particle model, wherein the hair particle model simulates a target hair rendering effect;
    • obtaining a hair particle order parameter corresponding to the hair particle model, wherein the hair particle order parameter indicates an order of a respective distance of each hair particle from a specified reference point in a model space;
    • determining, based on the hair particle order parameter, a target hair particle of the hair particle currently involved in rendering; and
    • obtaining a rendered hair virtual model by determining a hair segment rendering region based on the particle attribute information of the target hair particle and rendering the hair segment rendering region.


Optionally, after obtaining the hair particle order parameter corresponding to the hair particle model, the method further includes:

    • obtaining a number of hair layer particles corresponding to the hair particle model, wherein the number of hair layer particles indicates a number of hair particles in each hair layer of the hair particle model, and the hair layer is obtained by layering the model space of the hair particle model based on the specified reference point; and
    • determining the target hair particle currently involved in rendering based on the number of hair layer particles and the hair particle order parameter.


Optionally, before obtaining the hair particle order parameter corresponding to the hair particle model, the method further includes:

    • layering the model space based on a relative distance between the hair particle model and the specified reference point and a specified coordinate axis of the model space, to obtain a plurality of hair layers and a total number of the plurality of hair layers;
    • determining a hair layer number of the hair layer in which each hair particle of the hair particle model is located, based on the relative distance and the total number of the plurality of hair layers; and
    • determining the number of the hair layer particles and the hair particle order parameter based on the hair layer numbers of the hair layers in which all the hair particles of the hair particle model are located.


Optionally, layering the model space based on the relative distance between the hair particle model and the specified reference point and the specified coordinate axis of the model space, to obtain the plurality of hair layers and the total number of the plurality of hair layers includes:

    • determining a first distance and a second distance between the hair particle model and the specified reference point in the model space, wherein the first distance indicates a minimum distance between the hair particle model and the specified reference point and the second distance indicates a maximum distance between the hair particle model and the specified reference point; and
    • layering the model space based on the first distance, the second distance, and the specified coordinate axis of the model space to obtain the plurality of hair layers and the total number of the plurality of hair layers.
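The layering described above can be sketched in code. The following is a minimal illustration under stated assumptions, not the disclosed implementation: the function name and the use of NumPy are invented for this sketch, and the reference point is represented indirectly as a per-particle distance array; the 1024-layer default follows the preset layer counts mentioned elsewhere in this disclosure.

```python
import numpy as np

def layer_hair_particles(distances, num_layers=1024):
    """Assign each hair particle to a distance-based hair layer.

    distances: per-particle distance from the specified reference point
    (e.g. the camera) in model space.
    num_layers: preset number of hair layers (illustrative default).
    Returns the per-particle hair layer number and the layer thickness.
    """
    d_min = distances.min()  # the "first distance" (minimum)
    d_max = distances.max()  # the "second distance" (maximum)
    # Each layer covers an equal distance interval between the two extremes.
    thickness = (d_max - d_min) / num_layers
    # Layer number = distance offset divided by thickness, clamped so the
    # farthest particle still falls in the last layer.
    layer = np.minimum(
        ((distances - d_min) / thickness).astype(int),
        num_layers - 1,
    )
    return layer, thickness
```

Dividing the span between the minimum and maximum distances by the layer count gives the layer thickness, and each particle's layer number is its distance offset divided by that thickness.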


Optionally, after layering the model space based on the first distance, the second distance, and the specified coordinate axis of the model space to obtain the plurality of hair layers and the total number of the plurality of hair layers, the method includes:

    • determining a hair layer number of a hair layer where the hair particles are located based on the first distance, the second distance, and a target distance corresponding to the hair particle, the target distance indicating a distance between the hair particle and the specified reference point;
    • generating a hair layer identifier based on the hair layer number, and associating the hair layer identifier with the hair particle; and
    • performing a prefix sum processing based on hair layer identifiers corresponding to all hair particles in the hair particle model to obtain the number of the hair layer particles.
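A minimal sketch of the identifier-and-prefix-sum step, under the assumption that the hair layer identifier is simply the layer number computed in the layering step; the function name and the NumPy calls are illustrative, not the disclosed implementation.

```python
import numpy as np

def layer_particle_counts(layer_ids, num_layers):
    """Count particles per hair layer and take the prefix sum.

    layer_ids: hair layer identifier per particle.
    Returns (counts, offsets): counts[k] is the number of hair layer
    particles in layer k; offsets[k] (the exclusive prefix sum) is the
    index at which layer k's particles begin once particles are sorted
    from near to far.
    """
    counts = np.bincount(layer_ids, minlength=num_layers)
    # Exclusive prefix sum: running total of all nearer layers.
    offsets = np.concatenate(([0], np.cumsum(counts)[:-1]))
    return counts, offsets
```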


Optionally, after layering the model space based on the first distance, the second distance, and the specified coordinate axis of the model space to obtain the plurality of hair layers and the total number of the plurality of hair layers, the method further includes:

    • sorting all the hair particles in the hair particle model from near to far according to the respective distances of the hair particles from the specified reference point based on the hair layer identifiers respectively corresponding to the hair particles, to generate the hair particle order parameter.
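The near-to-far ordering described above can be obtained with a stable sort on the hair layer identifiers. This one-line sketch is an assumption about the mechanism, not the disclosed implementation; stability keeps particles within the same layer in their original order.

```python
import numpy as np

def sort_particles_by_layer(layer_ids):
    """Return particle indices ordered from near to far by hair layer.

    A stable argsort on the layer identifier yields the hair particle
    order parameter: particles in nearer layers come first, and ties
    within a layer preserve the input order.
    """
    return np.argsort(layer_ids, kind="stable")
```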


Optionally, the method further includes:

    • classifying all the hair particles in the hair particle model based on the number of hair layer particles and the hair particle order parameter to obtain a plurality of hair particle sets, wherein ones of the hair particles in each of the hair particle sets belong to a same hair layer of the hair layers and are arranged in order;
    • generating a hair particle sorting result based on the plurality of hair particle sets; and
    • determining the target hair particle currently involved in the rendering according to the hair particle sorting result.


Optionally, obtaining the rendered hair virtual model by determining the hair segment rendering region based on the particle attribute information of the target hair particle and rendering the hair segment rendering region includes:

    • obtaining an initial hair width corresponding to the target hair particle based on the particle attribute information of the target hair particle;
    • determining a width adjustment value for the initial hair width based on a distance between the target hair particle and the specified reference point;
    • adjusting the initial hair width based on the width adjustment value to obtain an adjusted hair width;
    • determining a plurality of region rendering vertices corresponding to the hair segment rendering region based on the adjusted hair width;
    • rendering a plurality of polygonal grids based on the plurality of region rendering vertices; and
    • generating the hair virtual model based on the plurality of polygonal grids.
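The vertex-expansion step can be illustrated as follows. This is a hedged sketch: the restriction to the XY plane, the function signature, and the use of consecutive particle positions as segment endpoints are assumptions for illustration, and the width passed in is the adjusted hair width produced by the preceding steps.

```python
import numpy as np

def hair_segment_quad(p0, p1, tangent, width):
    """Build the four region rendering vertices for one hair segment.

    p0, p1: segment endpoints (e.g. consecutive hair particle positions);
    tangent: unit tangent of the hair at this segment;
    width: the adjusted hair width.
    The segment is expanded along a direction perpendicular to the
    tangent (here taken in the XY plane as a simplifying assumption).
    """
    # Perpendicular to the tangent in the XY plane.
    normal = np.array([-tangent[1], tangent[0], 0.0])
    half = 0.5 * width * normal
    # The two triangles of the polygonal grid share these four vertices.
    return np.array([p0 - half, p0 + half, p1 - half, p1 + half])
```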


Optionally, after generating the hair virtual model based on the plurality of polygonal grids, the method further includes:

    • determining a hair rendering pixel located in the hair segment rendering region;
    • determining, based on a tangent line of the hair particle, a relative distance between the hair rendering pixel and the tangent line as a pixel distance; and
    • adjusting current transparency of the hair rendering pixel based on the pixel distance.


Optionally, the adjusting of the current transparency of the hair rendering pixel based on the pixel distance includes:

    • performing a smoothness calculation based on a pixel width of the hair rendering pixel, the pixel distance, a preset conversion ratio, and a specified constant to obtain a transparency adjustment value for the hair rendering pixel; and
    • adjusting the current transparency of the hair rendering pixel based on the transparency adjustment value to obtain an adjusted hair rendering pixel.
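The smoothness calculation reads like the Hermite smoothstep familiar from shading languages; the sketch below assumes that interpretation, and the function names and the default conversion ratio and constant are placeholders, not disclosed values.

```python
def smoothstep(edge0, edge1, x):
    """Hermite smoothstep, as defined in common shading languages."""
    t = min(max((x - edge0) / (edge1 - edge0), 0.0), 1.0)
    return t * t * (3.0 - 2.0 * t)

def adjust_transparency(alpha, pixel_distance, pixel_width,
                        conversion_ratio=1.0, constant=1.0):
    """Fade a hair rendering pixel's alpha near the hair edge.

    pixel_distance: distance from the pixel to the hair tangent line;
    pixel_width: on-screen width covered by one pixel;
    conversion_ratio, constant: the preset conversion ratio and the
    specified constant named in the text (hypothetical defaults here).
    """
    half_width = constant * conversion_ratio * pixel_width
    # 1 at the hair centre, falling smoothly to 0 at the edge, which
    # softens the colour transition and reduces aliasing.
    fade = 1.0 - smoothstep(0.0, half_width, pixel_distance)
    return alpha * fade
```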


Some embodiments of the present disclosure provide methods and apparatuses for rendering a hair virtual model, computer devices, and storage media, which: obtain particle attribute information of a hair particle in a hair particle model, wherein the hair particle model simulates a target hair rendering effect; obtain a hair particle order parameter corresponding to the hair particle model, wherein the hair particle order parameter indicates an order of a respective distance of each hair particle from a specified reference point in a model space; determine, based on the hair particle order parameter, a target hair particle of the hair particle currently involved in rendering; and finally obtain a rendered hair virtual model by determining a hair segment rendering region based on the particle attribute information of the target hair particle and rendering the hair segment rendering region. The embodiments of the present disclosure generate a hair particle model through simulation, split hair into hair particles, and render the hair particles, which reduces video memory occupancy of a computer device and improves hair rendering efficiency. Moreover, by adjusting the current transparency of a hair rendering pixel based on its pixel distance at the hair edge, it is possible to smooth the colour transition and improve the anti-aliasing effect of the hair during hair rendering, thereby improving authenticity of target hair.


Detailed implementation of the above operations may be referred to aforementioned embodiments, and will not be repeated herein.


The storage medium may include a Read Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, an optical disk, or the like.


Due to the computer programs stored in the storage medium, the operations in any of the methods for rendering a hair virtual model provided in the embodiments of the present disclosure may be executed. Thus, the advantageous effects that can be achieved by any of the methods for rendering the hair virtual model provided in the embodiments of the present disclosure can be realized. For details, refer to the foregoing embodiments; they are not repeated herein.


In the above-mentioned embodiments, the description of each embodiment has its own emphasis. For parts not described in detail in a certain embodiment, related description of other embodiments may be referred to.


The methods and apparatuses for rendering a hair virtual model, computer devices, and storage media provided in the embodiments of the present disclosure are introduced in detail, and the principles and embodiments of the present disclosure are described herein using specific examples. The description of the above embodiments merely aims to help understand the technical solutions and the core concept of the present disclosure. It will be appreciated by those of ordinary skill in the art that modifications may still be made to the technical solutions described in the foregoing embodiments, or equivalent replacements may be made to some of the technical features therein. However, these modifications or equivalent replacements do not cause the essence of the corresponding technical solutions to depart from the scope of the technical solutions of the embodiments of the present disclosure.

Claims
  • 1. A method for rendering a hair virtual model, comprising: obtaining, by a terminal, particle attribute information of a hair particle in a hair particle model, wherein the hair particle model simulates a target hair rendering effect; obtaining, by the terminal, a hair particle order parameter corresponding to the hair particle model, wherein the hair particle order parameter indicates an order of respective distances of each hair particle in the hair particle model from a specified reference point in a model space of the hair particle model; based on the hair particle order parameter, determining a target hair particle of the hair particle in the hair particle model for rendering; and obtaining, by the terminal, a rendered hair virtual model for the target hair particle by determining a hair segment rendering region based on the particle attribute information of the target hair particle and rendering the hair segment rendering region.
  • 2. The method for rendering the hair virtual model of claim 1, further comprising: obtaining a number of hair particles in a hair layer corresponding to the hair particle model, wherein the number of hair particles in the hair layer indicates a number of hair particles in the hair layer of the hair particle model, and the hair layer is obtained by layering the model space of the hair particle model based on the specified reference point; and determining the target hair particle based on the number of hair particles in the hair layer and the hair particle order parameter.
  • 3. The method for rendering the hair virtual model of claim 2, further comprising: layering the model space based on a relative distance between the hair particle model and the specified reference point and a specified coordinate axis of the model space, to obtain a plurality of hair layers and a total number of the plurality of hair layers; determining a hair layer number of the hair layer where the hair particles are located in the hair particle model based on the relative distance and the total number of the plurality of hair layers; and determining the number of hair particles in the hair layer and the hair particle order parameter based on the hair layer number of the hair layer.
  • 4. The method for rendering the hair virtual model of claim 3, wherein layering the model space further comprises: determining a first distance and a second distance between the hair particle model and the specified reference point in the model space, wherein the first distance indicates a minimum distance between the hair particle model and the specified reference point and the second distance indicates a maximum distance between the hair particle model and the specified reference point; and layering the model space based on the first distance, the second distance, and the specified coordinate axis of the model space.
  • 5. The method for rendering the hair virtual model of claim 4, wherein determining the hair layer number of the hair layer comprises: determining the hair layer number of the hair layer where the hair particles are located based on the first distance, the second distance, and a target distance corresponding to the hair particle indicating a distance between the hair particle and the specified reference point; and determining the number of hair particles in the hair layer comprises: generating a hair layer identifier of the hair layer based on the hair layer number, and associating the hair layer identifier with the hair particle; and performing a prefix sum processing based on the hair layer identifier corresponding to the hair particles in the hair layer in the hair particle model to obtain the number of the hair particles in the hair layer.
  • 6. The method for rendering the hair virtual model of claim 5, wherein determining the hair particle order parameter comprises: sorting all hair particles in the hair particle model from near to far according to respective distances of all the hair particles from the specified reference point based on hair layer identifiers respectively corresponding to the hair particles, to generate the hair particle order parameter.
  • 7. The method for rendering the hair virtual model of claim 5, wherein determining the target hair particle based on the number of hair particles in the hair layer and the hair particle order parameter comprises: classifying all hair particles in the hair particle model based on a number of hair particles in each of the plurality of hair layers and the hair particle order parameter to obtain a plurality of hair particle sets, wherein ones of all the hair particles in each of the hair particle sets belong to a same hair layer of the plurality of hair layers and are arranged in order; generating a hair particle sorting result based on the plurality of hair particle sets; and determining the target hair particle according to the hair particle sorting result.
  • 8. The method for rendering the hair virtual model of claim 1, wherein obtaining the rendered hair virtual model comprises: obtaining an initial hair width corresponding to the target hair particle based on the particle attribute information of the target hair particle; determining a width adjustment value for the initial hair width based on a distance between the target hair particle and the specified reference point; adjusting the initial hair width based on the width adjustment value to obtain an adjusted hair width; determining a plurality of region rendering vertices corresponding to the hair segment rendering region based on the adjusted hair width; rendering a plurality of polygonal grids based on the plurality of region rendering vertices; and generating the rendered hair virtual model based on the plurality of polygonal grids.
  • 9. The method for rendering the hair virtual model of claim 8, further comprising: determining a hair rendering pixel located in the hair segment rendering region; determining a relative distance between the hair rendering pixel and a tangent line of the target hair particle as a pixel distance; and adjusting current transparency of the hair rendering pixel based on the pixel distance.
  • 10. The method for rendering the hair virtual model of claim 9, wherein the adjusting of the current transparency of the hair rendering pixel based on the pixel distance comprises: performing a smoothness calculation based on a pixel width of the hair rendering pixel, the pixel distance, a preset conversion ratio, and a specified constant to obtain a transparency adjustment value for the hair rendering pixel; and adjusting the current transparency of the hair rendering pixel based on the transparency adjustment value to obtain an adjusted hair rendering pixel.
  • 11. (canceled)
  • 12. A computer device, comprising a processor and a memory storing a computer program executable by the processor to perform operations comprising: obtaining particle attribute information of a hair particle in a hair particle model, wherein the hair particle model simulates a target hair rendering effect; obtaining a hair particle order parameter corresponding to the hair particle model, wherein the hair particle order parameter indicates an order of respective distances of each hair particle in the hair particle model from a specified reference point in a model space of the hair particle model; based on the hair particle order parameter, determining a target hair particle of the hair particle in the hair particle model for rendering; and obtaining a rendered hair virtual model for the target hair particle by determining a hair segment rendering region based on the particle attribute information of the target hair particle and rendering the hair segment rendering region.
  • 13. A non-transitory computer-readable storage medium storing a computer program executable by a processor to perform operations comprising: obtaining particle attribute information of a hair particle in a hair particle model, wherein the hair particle model simulates a target hair rendering effect; obtaining a hair particle order parameter corresponding to the hair particle model, wherein the hair particle order parameter indicates an order of respective distances of each hair particle in the hair particle model from a specified reference point in a model space of the hair particle model; based on the hair particle order parameter, determining a target hair particle of the hair particle in the hair particle model for rendering; and obtaining a rendered hair virtual model for the target hair particle by determining a hair segment rendering region based on the particle attribute information of the target hair particle and rendering the hair segment rendering region.
  • 14. The method of claim 1, wherein the particle attribute information comprises a location of the hair particle in the model space of the hair particle model and a tangential attribute of the hair particle.
  • 15. The method of claim 4, wherein layering the model space comprises: layering the model space into a preset number of hair layers based on the first distance, the second distance, and the specified coordinate axis of the model space.
  • 16. The method of claim 15, wherein the preset number of hair layers is 1024 or 2048.
  • 17. The method of claim 15, wherein a thickness of each of the preset number of hair layers represents a distance interval between the hair particle and is determined according to a following equation:
  • 18. The method of claim 15, wherein determining the hair layer number of the hair layer where the hair particles are located based on the first distance, the second distance, and the target distance comprises: determining the hair layer number of the hair layer according to a following equation:
  • 19. The computer device of claim 12, wherein the operations further comprise: obtaining a number of hair particles in a hair layer corresponding to the hair particle model, wherein the number of hair particles in the hair layer indicates a number of hair particles in the hair layer of the hair particle model, and the hair layer is obtained by layering the model space of the hair particle model based on the specified reference point; and determining the target hair particle based on the number of hair particles in the hair layer and the hair particle order parameter.
  • 20. The computer device of claim 12, wherein the operations further comprise: layering the model space based on a relative distance between the hair particle model and the specified reference point and a specified coordinate axis of the model space, to obtain a plurality of hair layers and a total number of the plurality of hair layers; determining a hair layer number of the hair layer where the hair particles are located in the hair particle model based on the relative distance and the total number of the plurality of hair layers; and determining the number of hair particles in the hair layer and the hair particle order parameter based on the hair layer number of the hair layer.
  • 21. The non-transitory computer-readable storage medium of claim 13, wherein obtaining the rendered hair virtual model comprises: obtaining an initial hair width corresponding to the target hair particle based on the particle attribute information of the target hair particle; determining a width adjustment value for the initial hair width based on a distance between the target hair particle and the specified reference point; adjusting the initial hair width based on the width adjustment value to obtain an adjusted hair width; determining a plurality of region rendering vertices corresponding to the hair segment rendering region based on the adjusted hair width; rendering a plurality of polygonal grids based on the plurality of region rendering vertices; and generating the rendered hair virtual model based on the plurality of polygonal grids.
Priority Claims (1)
Number Date Country Kind
202210488443.4 May 2022 CN national
PCT Information
Filing Document Filing Date Country Kind
PCT/CN2022/119162 9/15/2022 WO