This application claims priority to Chinese Patent Application No. 202210488443.4, filed in the Chinese Patent Office on May 6, 2022, and entitled “METHODS AND APPARATUSES FOR RENDERING A HAIR VIRTUAL MODEL, COMPUTER DEVICES, AND STORAGE MEDIA”. The disclosure of the above application is incorporated herein by reference in its entirety.
The present disclosure relates to computer image processing technologies, and more particularly, to methods and apparatuses for rendering a hair virtual model, computer devices, and storage media.
With the continuous development of computer communication technology and the wide popularization and application of terminals such as smartphones, tablet computers, and notebook computers, terminals are developing in a diversified and personalized direction and are gradually becoming indispensable in people's life and work. To satisfy people's pursuit of spiritual life, entertainment games capable of running on the terminals are emerging in increasing numbers, and terminal games have become an indispensable way of living and entertainment. For a user to obtain a better game experience, many terminal games are often constructed based on real characters and scenes. Therefore, during game design, a game scene is expected to be closer to a real situation.
In practical game design projects, it is common to simulate human characters and animal characters, for example, to simulate hair of the human characters and hair of the animal characters. During hair simulation, to ensure that the hair can be displayed normally and to avoid a display disorder of the hair due to a change in a viewing angle of a game, the hair is usually rendered in a semi-transparent sorting manner, and an anti-aliasing effect is realized.
Some embodiments of the present disclosure provide methods and apparatuses for rendering a hair virtual model, computer devices, and storage media, which can solve a problem in the prior art that video memory occupation of a computer device increases rapidly when hair semi-transparent sorting rendering and an anti-aliasing effect are realized, causing inefficiency in hair rendering.
In a first aspect, one or more embodiments of the present disclosure provide a method for rendering a hair virtual model, including:
In some embodiments, after obtaining the hair particle order parameter corresponding to the hair particle model, the method further includes:
In some embodiments, before obtaining the hair particle order parameter corresponding to the hair particle model, the method further includes:
In some embodiments, the layering the model space based on the relative distance between the hair particle model and the specified reference point and the specified coordinate axis of the model space, to obtain the hair layers and the total number of the hair layers includes:
In some embodiments, after layering the model space based on the first distance, the second distance, and the specified coordinate axis of the model space to obtain the hair layers and the total number of the hair layers, the method further includes:
In some embodiments, after layering the model space based on the first distance, the second distance, and the specified coordinate axis of the model space to obtain the plurality of hair layers and the total number of the plurality of hair layers, the method further includes:
In some embodiments, the method further includes:
In some embodiments, obtaining the rendered hair virtual model by determining the hair segment rendering region based on the particle attribute information of the target hair particle and rendering the hair segment rendering region includes:
In some embodiments, after generating the hair virtual model based on the plurality of polygon grids, the method further includes:
In some embodiments, adjusting the current transparency of the hair rendering pixel based on the pixel distance includes:
In a second aspect, one or more embodiments of the present disclosure further provides an apparatus for rendering a hair virtual model, including:
In some embodiments, the apparatus further includes:
In some embodiments, the apparatus further includes:
In some embodiments, the apparatus further includes:
In some embodiments, the apparatus further includes:
In some embodiments, the apparatus further includes:
In some embodiments, the apparatus further includes:
In some embodiments, the apparatus further includes:
In some embodiments, the apparatus further includes:
In some embodiments, the apparatus further includes:
In a third aspect, one or more embodiments of the present disclosure further provide a computer device, including a memory storing a computer program and a processor, wherein the computer program, when called from the memory and executed by the processor, causes the processor to perform the method for rendering the hair virtual model.
In a fourth aspect, one or more embodiments of the present disclosure further provide a storage medium storing a computer program, wherein the computer program, when executed by a processor, implements the method for rendering the hair virtual model.
Some embodiments of the present disclosure provide methods and apparatuses for rendering a hair virtual model, computer devices, and storage media, which: obtain particle attribute information of a hair particle in a hair particle model; obtain a hair particle order parameter corresponding to the hair particle model, wherein the hair particle order parameter indicates an order of respective distances between the hair particles and a specified reference point in a model space; then determine, based on the hair particle order parameter, a target hair particle currently involved in rendering; and finally obtain a rendered hair virtual model by determining a hair segment rendering region based on the particle attribute information of the target hair particle and rendering the hair segment rendering region. The embodiments of the present disclosure generate the hair particle model through simulation, split hair into hair particles, and render the hair particles, which reduces video memory occupancy of a computer device and improves hair rendering efficiency. Moreover, by adjusting a pixel distance at a hair edge, it is possible to improve colour relaxation and an anti-aliasing effect of the hair during hair rendering, thereby improving authenticity of target hair.
To describe technical solutions in some embodiments of the present disclosure more clearly, accompanying drawings required for describing the embodiments will be introduced briefly. It is apparent that the accompanying drawings in the following description are merely embodiments of the present disclosure, and other drawings may be obtained according to these drawings by those skilled in the art without involving any inventive effort.
Technical solutions in the embodiments of the present disclosure will be clearly and completely described below in connection with the accompanying drawings in the embodiments of the present disclosure. It is apparent that the described embodiments are only part of the embodiments of the present disclosure, and not all embodiments thereof. Based on the embodiments in the present disclosure, all other embodiments obtained by a person skilled in the art without any inventive effort fall within the protection scope of the present disclosure.
Some embodiments of the present disclosure provide methods and apparatuses for rendering a hair virtual model, computer devices, and storage media. Specifically, the methods for rendering the hair virtual model of the embodiments of the present disclosure may be performed by a computer device, which may be a terminal device. The terminal may be a smartphone, a tablet computer, a notebook computer, a touch screen, a game machine, a Personal Computer (PC), a Personal Digital Assistant (PDA), or the like.
For example, when the method for rendering the hair virtual model is run on the terminal device, the terminal device provides a graphical user interface to a user in a plurality of ways. For example, the graphical user interface may be rendered for display on a display screen of the terminal device, or may be presented by holographic projection. For example, the terminal device may include a touch screen and a processor. The touch screen may be configured to display the graphical user interface and to receive operation instructions generated by the user acting on the graphical user interface. The graphical user interface includes a display screen. The processor is configured to generate the graphical user interface, respond to the operation instructions, and control display of the graphical user interface on the touch screen.
Some embodiments of the present disclosure provide methods and apparatuses for rendering a hair virtual model, computer devices, and storage media, which will be described in detail below. It should be noted that the order in which the following embodiments are described is not intended to limit a preferred order of the embodiments.
Referring to
In operation 101, particle attribute information of a hair particle in a hair particle model is obtained. The hair particle model simulates a target hair rendering effect.
In the embodiments of the present disclosure, the particle attribute information includes a location of the hair particle in a model space and a tangential attribute of the hair particle.
In operation 102, a hair particle order parameter corresponding to the hair particle model is obtained. The hair particle order parameter indicates an order of respective distances of hair particles from a specified reference point in a model space.
In an embodiment, after the operation of obtaining the hair particle order parameter corresponding to the hair particle model, the method further includes:
In another embodiment, before the operation of obtaining the hair particle order parameter corresponding to the hair particle model, the method further includes:
Specifically, the operation of layering the model space based on the relative distance between the hair particle model and the specified reference point and the specified coordinate axis of the model space, to obtain the plurality of hair layers and the total number of the plurality of hair layers, may include:
Further, after the operation of layering the model space based on the first distance, the second distance, and the specified coordinate axis of the model space to obtain the plurality of hair layers and the total number of the plurality of hair layers, the method may include:
In another embodiment, after the operation of layering the model space based on the first distance, the second distance, and the specified coordinate axis of the model space to obtain the plurality of hair layers and the total number of the plurality of hair layers, the method may include:
Optionally, in one or more embodiments of the present disclosure, all the hair particles in the hair particle model may be classified based on the number of hair layer particles and the hair particle order parameter to obtain a plurality of hair particle sets. The hair particles in each of the hair particle sets belong to a same hair layer of the hair layers and are arranged in order. A hair particle sorting result may be generated based on the plurality of hair particle sets.
The target hair particle currently involved in the rendering may be determined according to the hair particle sorting result.
In operation 103, a target hair particle currently involved in rendering is determined based on the hair particle order parameter.
In operation 104, a rendered hair virtual model is obtained by determining a hair segment rendering region based on the particle attribute information of the target hair particle and rendering the hair segment rendering region.
In order to make target hair more realistic, the operation of obtaining the rendered hair virtual model by determining the hair segment rendering region based on the particle attribute information of the target hair particle and rendering the hair segment rendering region includes:
Further, after the operation of generating the hair virtual model based on the plurality of polygonal grids, the method may include:
In order to improve an anti-aliasing effect of the target hair, the operation of adjusting the current transparency of the hair rendering pixels based on the pixel distance may include:
In summary, the embodiments of the present disclosure provide a method for rendering a hair virtual model. On one hand, the method includes: obtaining particle attribute information of a hair particle in a hair particle model, wherein the hair particle model simulates a target hair rendering effect; obtaining a hair particle order parameter corresponding to the hair particle model; determining, based on the hair particle order parameter, a target hair particle currently involved in rendering; and obtaining a rendered hair virtual model by determining a hair segment rendering region based on the particle attribute information of the target hair particle and rendering the hair segment rendering region. By generating the hair particle model through simulation, splitting hair into hair particles, and rendering the hair particles, a semi-transparent effect of the target hair is achieved accurately. Moreover, since according to the embodiments of the present disclosure there is no need to create a linked list for each pixel, computing resource consumption of the hair rendering is reduced, thereby reducing video memory occupancy of a computer device and improving hair rendering efficiency. On the other hand, the embodiments of the present disclosure may further expand each hair particle along a direction perpendicular to a tangent line of hair. By adjusting a pixel distance at a hair edge to adjust current transparency of a hair rendering pixel, it is possible to improve colour relaxation and an anti-aliasing effect of the hair during hair rendering, thereby improving authenticity of the target hair.
Specifically, some embodiments of the present disclosure further provide a specific application method for rendering a hair virtual model. The specific method may be as follows.
(1) A minimum distance Min and a maximum distance Max of the hair particles in the hair particle model from a virtual camera (i.e., the specified reference point) can be estimated at a Central Processing Unit (CPU) end. Then, a model space between the minimum distance Min and the maximum distance Max is divided into a specified number of divisions (LayerCount) of hair layers (Layer). Thereafter, the hair particles are placed in different Layers according to a distance between each of the hair particles and the specified reference point to achieve approximate ordering. In some embodiments of the present disclosure, the specified number of divisions may be 1024, 2048, etc., and may be adjusted according to actual conditions. For example, the minimum distance Min and the maximum distance Max of the hair particle model may be calculated to be 20 cm and 40 cm, respectively. In this case, the model space between 20 cm and 40 cm may be divided into 1024 hair layers, so that each hair layer has a thickness of (40−20)/1024 cm and represents a distance interval.
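The layer division of step (1) can be sketched as follows. This is a minimal serial illustration in Python rather than GPU or engine code, and the helper name build_layers is hypothetical; the Min/Max/LayerCount values follow the example above.

```python
def build_layers(min_dist, max_dist, layer_count):
    """Divide the model space between min_dist and max_dist into
    layer_count hair layers; each layer is a distance interval."""
    thickness = (max_dist - min_dist) / layer_count
    return [(min_dist + i * thickness, min_dist + (i + 1) * thickness)
            for i in range(layer_count)]

# Example from the text: Min = 20 cm, Max = 40 cm, 1024 layers,
# so each layer is (40 - 20) / 1024 cm thick.
layers = build_layers(20.0, 40.0, 1024)
```

Each hair particle is then placed into the layer whose interval contains its distance from the virtual camera, which yields the approximate ordering described above.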
(2) An empty Buffer having a length of LayerCount may be constructed at a GPU end, and the empty Buffer is named CounterBuffer. The CounterBuffer is used to store a number of hair particles in each hair layer. Further, it is also possible to construct, at the GPU end, a Buffer having a length of a total number (ParticleCount) of hair particles in the hair particle model, and name the Buffer as RenderKeyBuffer. The RenderKeyBuffer is used to record hair particle identifiers ordered from near to far by distance from the specified reference point.
(3) A position Xi of a hair particle i in the model space can be read in parallel at the GPU end, and a distance between the hair particle and the virtual camera is calculated. Then, the hair layer identifier (LayerId) corresponding to the hair particle is calculated according to a calculation formula as below, and CounterBuffer[LayerId] is atomically incremented by one.

LayerId=floor(((dis−min)/(max−min))×LayerCount),

where dis is the distance between the hair particle and the virtual camera, min is the minimum distance of the hair particles from the virtual camera (i.e., the specified reference point), max is the maximum distance of the hair particles from the virtual camera (i.e., the specified reference point), and floor denotes rounding down to an integer.
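The per-particle layer identifier calculation of step (3) can be sketched as follows. This is a serial Python stand-in for the parallel GPU computation; the clamp to layer_count − 1 for the boundary case dis == max is an added assumption, and the counting loop stands in for the atomic increment of CounterBuffer[LayerId].

```python
def layer_id(dis, min_dist, max_dist, layer_count):
    # Normalize the camera distance into [0, 1], scale by the number
    # of layers, and truncate to an integer layer index; clamp so the
    # farthest particle (dis == max_dist) stays in the last layer.
    t = (dis - min_dist) / (max_dist - min_dist)
    return min(int(t * layer_count), layer_count - 1)

# Serial stand-in for the atomic increment of CounterBuffer[LayerId].
counter_buffer = [0] * 1024
for dis in (20.0, 25.0, 39.9, 40.0):
    counter_buffer[layer_id(dis, 20.0, 40.0, 1024)] += 1
```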
(4) After the hair layer identifier corresponding to each hair particle in the hair particle model is obtained, parallel prefix sum calculation is performed on the CounterBuffer at the GPU end, so that each entry of the CounterBuffer holds a total number of hair particles in all nearer hair layers, i.e., a write start position of the corresponding hair layer.
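Step (4) can be illustrated with a serial stand-in for the GPU's parallel prefix sum. The exclusive variant shown here (entry i holds the count of particles in all nearer layers, i.e., the first write position for layer i) is an assumption, but it is consistent with how WritingPoint is used in step (5).

```python
def exclusive_prefix_sum(counts):
    # Serial replacement for the parallel scan over CounterBuffer:
    # out[i] becomes the number of hair particles in all layers nearer
    # than layer i, i.e., the write start offset of layer i.
    out, running = [], 0
    for c in counts:
        out.append(running)
        running += c
    return out
```

For example, per-layer counts [2, 0, 3, 1] scan to offsets [0, 2, 2, 5].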
(5) After the hair layer identifier corresponding to each hair particle in the hair particle model is obtained, the value of CounterBuffer[LayerId] is read and recorded as WritingPoint, CounterBuffer[LayerId] is atomically incremented by one, and RenderKeyBuffer[WritingPoint] is set to i. In this way, the identifiers of the hair particles are stored in the RenderKeyBuffer in order from near to far.
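The scatter of step (5) amounts to the write phase of a counting sort. A serial Python sketch follows; on the GPU the read of the write pointer and its increment are performed atomically, and the offsets argument is assumed to be the CounterBuffer after the prefix sum of step (4).

```python
def scatter_particles(layer_ids, offsets):
    # layer_ids[i] is the LayerId of hair particle i; offsets[l] is the
    # prefix-summed write start position of layer l. Returns the
    # RenderKeyBuffer: particle identifiers ordered near-to-far.
    render_key_buffer = [None] * len(layer_ids)
    cursor = list(offsets)
    for i, lid in enumerate(layer_ids):
        writing_point = cursor[lid]   # read CounterBuffer[LayerId]
        cursor[lid] += 1              # atomic increment on the GPU
        render_key_buffer[writing_point] = i
    return render_key_buffer

# Four particles in layers 2, 0, 1, 0; per-layer counts [2, 1, 1]
# give exclusive-prefix-sum offsets [0, 2, 3].
order = scatter_particles([2, 0, 1, 0], [0, 2, 3])
```

With these inputs, particles 1 and 3 (layer 0) come first, then particle 2 (layer 1), then particle 0 (layer 2).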
(6) Referring to
(7) In a PixelShader, the PixelWidth and the PixelDistance are determined. A coverage degree of the hair over a current pixel is calculated according to a smoothness calculation formula to obtain a transparency value (Alpha) of the current pixel. Finally, Alpha blending is performed on the pixels by GPU hardware.
Smoothstep(0, 1, PixelWidth−LengthToPixel×PixelDistance+0.5),

where LengthToPixel is a conversion ratio of a world space length to a pixel space length under a current depth, and 0.5 is an empirical value, which can be replaced according to actual situations.
Specifically, for y=smoothstep(0, 1, x), the above [PixelWidth−LengthToPixel×PixelDistance+0.5] is denoted as x. If x is less than 0, a smoothness value is 0; if x is greater than or equal to 0 and less than or equal to 1, the smoothness value is 3x²−2x³; if x is greater than 1, the smoothness value is 1. The smoothness value thus smoothly transitions from “0” to “1”, thereby realizing a smooth transition.
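The coverage and transparency calculation of steps (6) and (7) can be sketched as follows. This is a Python illustration of the smoothstep-based formula above rather than the PixelShader itself, and the function name pixel_alpha is hypothetical.

```python
def smoothstep(edge0, edge1, x):
    # Piecewise definition from the text: 0 below edge0,
    # 3t^2 - 2t^3 in between, 1 above edge1.
    t = (x - edge0) / (edge1 - edge0)
    t = max(0.0, min(1.0, t))
    return t * t * (3.0 - 2.0 * t)

def pixel_alpha(pixel_width, pixel_distance, length_to_pixel):
    # Coverage of the hair over the current pixel; 0.5 is the
    # empirical offset mentioned in the text.
    return smoothstep(0.0, 1.0,
                      pixel_width - length_to_pixel * pixel_distance + 0.5)
```

A pixel near the strand centre (small PixelDistance) receives an Alpha close to 1, while a pixel far outside the strand receives 0, with a smooth cubic transition in between that realizes the anti-aliased edge.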
To facilitate better implementation of the method for rendering the hair virtual model of the embodiments of the present disclosure, some embodiments of the present disclosure further provide an apparatus for rendering a hair virtual model, wherein meanings of nouns are the same as in the above-mentioned method for rendering the hair virtual model. Specific implementation details may be referred to the description in the method embodiments.
Referring to
In some embodiments, the apparatus further includes:
In some embodiments, the apparatus further includes:
In some embodiments, the apparatus further includes:
In some embodiments, the apparatus further includes:
In some embodiments, the apparatus further includes:
In some embodiments, the apparatus further includes:
In some embodiments, the apparatus further includes:
In some embodiments, the apparatus further includes:
In some embodiments, the apparatus further includes:
Some embodiments of the present disclosure provide an apparatus for rendering a hair virtual model, wherein: a first obtaining unit 201 obtains particle attribute information of a hair particle in a hair particle model, wherein the hair particle model simulates a target hair rendering effect; a second obtaining unit 202 obtains a hair particle order parameter corresponding to the hair particle model, wherein the hair particle order parameter indicates an order of a respective distance of each hair particle from a specified reference point in a model space; a first determination unit 203 determines, based on the hair particle order parameter, a target hair particle currently involved in rendering; and a second determination unit obtains a rendered hair virtual model by determining a hair segment rendering region based on the particle attribute information of the target hair particle and rendering the hair segment rendering region. On one hand, the embodiments of the present disclosure generate the hair particle model through simulation, split hair into hair particles, and render the hair particles, which achieves a semi-transparent effect of the target hair, reduces video memory occupancy of a computer device, and improves hair rendering efficiency. On the other hand, the embodiments of the present disclosure may further expand each hair particle along a direction perpendicular to a tangent line of hair. By adjusting a pixel distance at a hair edge to adjust current transparency of a hair rendering pixel, it is possible to improve colour relaxation and an anti-aliasing effect of the hair during hair rendering, thereby improving authenticity of the target hair.
Accordingly, some embodiments of the present disclosure further provide a computer device. The computer device may be a terminal or a server. The terminal may be a smartphone, a tablet computer, a notebook computer, a touch screen, a game machine, a Personal Computer (PC), a Personal Digital Assistant (PDA), or the like. As shown in
The processor 301 is a control centre of the computer device 300, connects various parts of the computer device 300 by various interfaces and lines, and performs various functions of the computer device 300 and processes data by running or loading software programs and/or modules stored in the memory 302 and invoking data stored in the memory 302, thereby monitoring the computer device 300 as a whole.
In some embodiments of the present disclosure, the processor 301 in the computer device 300 loads instructions corresponding to processes of one or more application programs into the memory 302 according to the following operations, and runs the application programs stored in the memory 302 to implement various functions:
Optionally, after obtaining the hair particle order parameter corresponding to the hair particle model, the method further includes:
Optionally, before obtaining the hair particle order parameter corresponding to the hair particle model, the method further includes:
Optionally, the layering the model space based on the relative distance between the hair particle model and the specified reference point and the specified coordinate axis of the model space, to obtain the plurality of hair layers and the total number of the plurality of hair layers includes:
Optionally, after layering the model space based on the first distance, the second distance, and the specified coordinate axis of the model space to obtain the plurality of hair layers and the total number of the plurality of hair layers, the method includes:
Optionally, after layering the model space based on the first distance, the second distance, and the specified coordinate axis of the model space to obtain the plurality of hair layers and the total number of the plurality of hair layers, the method further includes:
Optionally, the method further includes:
Optionally, obtaining the rendered hair virtual model by determining the hair segment rendering region based on the particle attribute information of the target hair particle and rendering the hair segment rendering region includes:
Optionally, after generating the hair virtual model based on the plurality of polygonal grids, the method further includes:
Optionally, the adjusting of the current transparency of the hair rendering pixel based on the pixel distance includes:
Some embodiments of the present disclosure provide methods and apparatuses for rendering a hair virtual model, computer devices, and storage media, which: obtain particle attribute information of a hair particle in a hair particle model, wherein the hair particle model simulates a target hair rendering effect; obtain a hair particle order parameter corresponding to the hair particle model, wherein the hair particle order parameter indicates an order of a respective distance of each hair particle from a specified reference point in a model space; determine, based on the hair particle order parameter, a target hair particle currently involved in rendering; and finally obtain a rendered hair virtual model by determining a hair segment rendering region based on the particle attribute information of the target hair particle and rendering the hair segment rendering region. The embodiments of the present disclosure generate the hair particle model through simulation, split hair into hair particles, and render the hair particles, which reduces video memory occupancy of a computer device and improves hair rendering efficiency. Moreover, by adjusting a pixel distance at a hair edge, it is possible to improve colour relaxation and an anti-aliasing effect of the hair during hair rendering, thereby improving authenticity of target hair.
Detailed implementation of the above operations may be referred to aforementioned embodiments, and will not be repeated herein.
Optionally, as shown in
The touch screen 303 may be configured to display a graphical player interface and to receive operation instructions generated by a player acting on the graphical player interface. The touch screen 303 may include a display panel and a touch panel. The display panel may be used to display information input by or provided to the player and various graphical player interfaces of the computer device, which may be composed of graphics, text, icons, videos, and any combination thereof. Alternatively, the display panel may be configured in a form of a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED), or the like. The touch panel may be used to collect a touch operation of the player on or near the touch panel (e.g., an operation of the player on or near the touch panel using any suitable object or accessory such as a finger or a stylus), generate a corresponding operation instruction, and execute a corresponding program according to the operation instruction. Alternatively, the touch panel may include a touch detection device and a touch controller. The touch detection device detects a touch orientation of the player, detects a signal brought about by the touch operation, and transmits the signal to the touch controller. The touch controller receives touch information from the touch detection device, converts the touch information into contact coordinates, sends the contact coordinates to the processor 301, and can receive and execute commands sent from the processor 301. The touch panel may cover the display panel. When the touch panel detects the touch operation on or near the touch panel, the touch panel transmits the touch operation to the processor 301 to determine a type of a touch event. Then, the processor 301 provides a corresponding visual output on the display panel according to the type of the touch event.
In the embodiments of the present disclosure, the touch panel and the display panel may be integrated into the touch screen 303 to implement input and output functions. However, in some embodiments, the touch panel and the display panel may be implemented as two separate components to implement the input and output functions. That is, the touch screen 303 may implement the input function as part of the input unit 306.
In the embodiments of the present disclosure, a game application is executed by the processor 301 to generate a graphical player interface on the touch display screen 303. The touch screen 303 may be configured to display the graphical player interface and to receive operation instructions generated by the player acting on the graphical user interface.
The radio frequency circuit 304 may be configured to transmit and receive radio frequency signals, so as to establish wireless communication with a network device or other computer devices and to transmit and receive signals between the computer device and the network device or the other computer devices.
The audio circuit 305 may be used to provide an audio interface between the player and the computer device through a speaker and a microphone. The audio circuit 305 may transmit an electrical signal converted from received audio data to a loudspeaker, and the loudspeaker converts the electrical signal into a sound signal for output. On the other hand, the microphone converts a collected sound signal into an electrical signal, the audio circuit 305 receives and converts the electrical signal into audio data. The audio data is outputted to the processor 301 for processing, and the processed audio data is sent to, for example, another computer device through the radio frequency circuit 304, or the audio data is outputted to the memory 302 for further processing. The audio circuit 305 may also include an earplug jack to provide communication between a peripheral headset and the computer device.
The input unit 306 may be configured to receive input numbers, character information, or player characteristic information (e.g., fingerprints, iris, face information), and to generate keyboard, mouse, joystick, optical, or trackball signal input related to player settings and functional control.
The power supply 307 is configured to power various components of the computer device 300. Alternatively, the power supply 307 may be logically connected to the processor 301 through a power supply management system, so that functions such as charging, discharging, and power consumption management are managed through the power supply management system. The power supply 307 may further include one or more DC or AC power supplies, a recharging system, a power failure detection circuit, a power converter or inverter, a power status indicator, or any other component.
Although not shown in
In the above-mentioned embodiments, the description of each embodiment has its own emphasis, and parts not described in detail in a certain embodiment may be referred to related description of other embodiments.
As can be seen from the above, on one hand, the computer device provided in the embodiments generates the hair particle model through simulation, splits hair into hair particles, and renders the hair particles, which achieves a semi-transparent effect of the target hair, reduces video memory occupancy of the computer device, and improves hair rendering efficiency. On the other hand, the embodiments of the present disclosure may further expand each hair particle along a direction perpendicular to a tangent line of hair. By adjusting a pixel distance at a hair edge to adjust current transparency of a hair rendering pixel, it is possible to improve colour relaxation and an anti-aliasing effect of the hair during hair rendering, thereby improving authenticity of the target hair.
It will be appreciated by those of ordinary skill in the art that all or a portion of the operations of the various methods of the above-described embodiments may be performed by instructions, or by instructions through controlling relevant hardware, and the instructions may be stored in a computer-readable storage medium and loaded and executed by a processor.
To this end, embodiments of the present disclosure provide a computer readable storage medium having stored therein a plurality of computer programs that can be loaded by a processor to perform operations in the method for rendering the hair virtual model provided in embodiments of the present disclosure. For example, the computer programs may perform the following operations:
Optionally, after obtaining the hair particle order parameter corresponding to the hair particle model, the method further includes:
Optionally, before obtaining the hair particle order parameter corresponding to the hair particle model, the method further includes:
Optionally, layering the model space based on the relative distance between the hair particle model and the specified reference point and the specified coordinate axis of the model space, to obtain the plurality of hair layers and the total number of the plurality of hair layers includes:
Optionally, after layering the model space based on the first distance, the second distance, and the specified coordinate axis of the model space to obtain the plurality of hair layers and the total number of the plurality of hair layers, the method includes:
Optionally, after layering the model space based on the first distance, the second distance, and the specified coordinate axis of the model space to obtain the plurality of hair layers and the total number of the plurality of hair layers, the method further includes:
Optionally, the method further includes:
Optionally, obtaining the rendered hair virtual model by determining the hair segment rendering region based on the particle attribute information of the target hair particle and rendering the hair segment rendering region includes:
Optionally, after generating the hair virtual model based on the plurality of polygonal grids, the method further includes:
Optionally, the adjusting of the current transparency of the hair rendering pixel based on the pixel distance includes:
Some embodiments of the present disclosure provide methods and apparatuses for rendering a hair virtual model, computer devices, and storage media, which: obtain particle attribute information of a hair particle in a hair particle model, wherein the hair particle model simulates a target hair rendering effect; obtain a hair particle order parameter corresponding to the hair particle model, wherein the hair particle order parameter indicates an order of a respective distance of each hair particle from a specified reference point in a model space; determine, based on the hair particle order parameter, a target hair particle currently involved in rendering; and finally obtain a rendered hair virtual model by determining a hair segment rendering region based on the particle attribute information of the target hair particle and rendering the hair segment rendering region. The embodiments of the present disclosure generate a hair particle model through simulation, split hair into hair particles, and render the hair particles, which reduces video memory occupancy of a computer device and improves hair rendering efficiency. Moreover, by adjusting the current transparency of a hair rendering pixel based on a pixel distance at the hair edge, color aliasing during hair rendering can be alleviated and an anti-aliasing effect can be achieved, thereby improving the authenticity of the target hair.
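The distance ordering underlying the hair particle order parameter can be sketched as follows. `HairParticle`, `order_particles`, and the choice of the camera position as the specified reference point are hypothetical names and choices made for illustration; the sketch only shows that semi-transparent particles are sorted far-to-near from the reference point so that they can be rendered back-to-front:

```python
import math
from dataclasses import dataclass


@dataclass
class HairParticle:
    # Particle position in model space; other particle attributes
    # (width, tangent, color) would live alongside it.
    position: tuple


def order_particles(particles, reference_point):
    """Return particle indices sorted far-to-near from the reference
    point (e.g. the camera), i.e. the order in which semi-transparent
    hair segments should be drawn so nearer segments blend over
    farther ones."""
    def dist(index):
        return math.dist(particles[index].position, reference_point)

    return sorted(range(len(particles)), key=dist, reverse=True)
```

A renderer would then iterate this index list, look up each particle's attribute information, and rasterize the corresponding hair segment rendering region.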
For detailed implementation of the above operations, reference may be made to the aforementioned embodiments; details are not repeated herein.
The storage medium may include a Read Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, an optical disk, or the like.
Due to the computer programs stored in the storage medium, the operations in any of the methods for rendering a hair virtual model provided in the embodiments of the present disclosure may be executed. Thus, the advantageous effects that can be achieved by any of the methods for rendering the hair virtual model provided in the embodiments of the present disclosure can be realized. For details, reference may be made to the foregoing embodiments, which are not repeated herein.
The methods and apparatuses for rendering a hair virtual model, computer devices, and storage media provided in the embodiments of the present disclosure are introduced in detail above, and the principles and embodiments of the present disclosure are described herein using specific examples. The description of the above embodiments merely aims to help understand the technical solutions and the core concept of the present disclosure. It will be appreciated by those of ordinary skill in the art that modifications may still be made to the technical solutions described in the foregoing embodiments, or equivalent replacements may be made to some of the technical features therein. However, these modifications or equivalent replacements do not cause the essence of the corresponding technical solutions to depart from the scope of the technical solutions of the embodiments of the present disclosure.
Number | Date | Country | Kind
---|---|---|---
202210488443.4 | May 2022 | CN | national

Filing Document | Filing Date | Country | Kind
---|---|---|---
PCT/CN2022/119162 | 9/15/2022 | WO |