The present disclosure relates to the field of computer vision technologies, and in particular to methods, apparatuses, devices and computer-readable storage media for action transfer.
Action transfer refers to transferring an action of an initial object involved in an initial motion video to a target object to form a target motion video. Because the initial motion video and the target motion video greatly differ in structure and view angle, it is very difficult to realize action transfer at the pixel level. Especially when the initial object makes an extreme action, or when the initial object and the target object greatly differ in structure, the accuracy of transferring the action to the target object is relatively low.
In view of this, the present disclosure at least provides an action transfer method and an action transfer apparatus.
According to a first aspect of embodiments of the present disclosure, a computer-implemented method of action transfer between objects is provided. The method includes: obtaining a first initial video involving an action sequence of an initial object; identifying a two-dimensional skeleton keypoint sequence of the initial object from multiple frames of image in the first initial video; converting the two-dimensional skeleton keypoint sequence of the initial object into a three-dimensional skeleton keypoint sequence of a target object; and generating a target video involving an action sequence of the target object based on the three-dimensional skeleton keypoint sequence of the target object.
In this aspect, by extracting the two-dimensional skeleton keypoint sequence, retargeting the two-dimensional skeleton keypoint sequence to the three-dimensional skeleton keypoint sequence, and performing action rendering for the target object based on the three-dimensional skeleton keypoint sequence, action transfer is achieved without performing action transfer directly at the pixel level. In this way, the problem that the initial video and the target video greatly differ in structure and view angle (especially when the initial object makes an extreme action, or the initial object and the target object greatly differ in structure) is mitigated, and the accuracy of the action transfer can be improved. Moreover, in this aspect, by retargeting the three-dimensional skeleton keypoint sequence from the two-dimensional skeleton keypoint sequence, the use of three-dimensional keypoint estimation and retargeting, which carries a large error in action transfer, is avoided, thereby helping to improve the accuracy of the action transfer.
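As an illustrative aid, the flow of this aspect can be sketched as follows; extract_2d_keypoints, retarget_to_3d and render_video are hypothetical stand-ins for the pose estimation, retargeting and rendering stages rather than names used by the disclosure:

```python
# A minimal sketch of the first aspect, assuming three hypothetical helpers
# (extract_2d_keypoints, retarget_to_3d, render_video) for the three stages.
def transfer_action(first_initial_video, target_object):
    # Identify the 2D skeleton keypoint sequence of the initial object.
    keypoints_2d = [extract_2d_keypoints(frame) for frame in first_initial_video]
    # Retarget the 2D sequence into a 3D skeleton keypoint sequence of the
    # target object (no pixel-level transfer is performed).
    keypoints_3d = retarget_to_3d(keypoints_2d, target_object)
    # Render the target video from the 3D skeleton keypoint sequence.
    return render_video(keypoints_3d, target_object)
```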
In a possible implementation, converting the two-dimensional skeleton keypoint sequence of the initial object into the three-dimensional skeleton keypoint sequence of the target object includes: determining an action transfer component sequence of the initial object based on the two-dimensional skeleton keypoint sequence of the initial object; and determining the three-dimensional skeleton keypoint sequence of the target object based on the action transfer component sequence of the initial object.
In this implementation, the three-dimensional skeleton keypoint sequence is retargeted by using the action transfer component sequence orthogonally decomposed based on the two-dimensional skeleton keypoint sequence. In this way, the use of three-dimensional keypoint estimation and retargeting, which carries a large error in action transfer, is avoided, thus helping to improve the accuracy of the action transfer.
In a possible implementation, before determining the three-dimensional skeleton keypoint sequence of the target object, the action transfer method further includes: obtaining a second initial video involving the target object; and identifying a two-dimensional skeleton keypoint sequence of the target object from multiple frames of image in the second initial video. Determining the three-dimensional skeleton keypoint sequence of the target object based on the action transfer component sequence of the initial object includes: determining an action transfer component sequence of the target object based on the two-dimensional skeleton keypoint sequence of the target object; determining a target action transfer component sequence based on the action transfer component sequence of the initial object and the action transfer component sequence of the target object; and determining the three-dimensional skeleton keypoint sequence of the target object based on the target action transfer component sequence.
In this implementation, the three-dimensional skeleton keypoint sequence is determined by fusing the action transfer component sequence orthogonally decomposed based on the two-dimensional skeleton keypoint sequence of the initial object and the action transfer component sequence orthogonally decomposed based on the two-dimensional skeleton keypoint sequence of the target object. In this case, when the initial object makes an extreme action or the initial object and the target object greatly differ in structure, the problem of low accuracy of the action transfer can be overcome.
In a possible implementation, the action transfer component sequence of the initial object includes a motion component sequence, an object structure component sequence and a photographing angle component sequence, and determining the action transfer component sequence of the initial object based on the two-dimensional skeleton keypoint sequence of the initial object includes: for each of the multiple frames of image in the first initial video, determining motion component information, object structure component information and photographing angle component information corresponding to the frame of image based on a two-dimensional skeleton keypoint corresponding to the frame of image; determining the motion component sequence based on the motion component information corresponding to each of the multiple frames of image in the first initial video; determining the object structure component sequence based on the object structure component information corresponding to each of the multiple frames of image in the first initial video; and determining the photographing angle component sequence based on the photographing angle component information corresponding to each of the multiple frames of image in the first initial video.
In this implementation, the action transfer component sequence may include a plurality of orthogonal component sequences. The three-dimensional skeleton keypoint sequence is determined by using the plurality of orthogonal component sequences, such that the problem of low accuracy of the action transfer can be further overcome when the initial object makes an extreme action or the initial object and the target object greatly differ in structure.
In a possible implementation, generating the target video involving the action sequence of the target object based on the three-dimensional skeleton keypoint sequence of the target object includes: generating a two-dimensional target skeleton keypoint sequence of the target object based on the three-dimensional skeleton keypoint sequence of the target object; and generating the target video involving the action sequence of the target object based on the two-dimensional target skeleton keypoint sequence of the target object.
In this implementation, the two-dimensional target skeleton keypoint sequence is obtained by performing re-projection on the reconstructed three-dimensional skeleton keypoint sequence. In this way, the use of three-dimensional keypoint estimation and retargeting with a large error in an action transfer is avoided, which helps to improve the accuracy of the action transfer.
In a possible implementation, converting the two-dimensional skeleton keypoint sequence of the initial object into the three-dimensional skeleton keypoint sequence of the target object includes: converting the two-dimensional skeleton keypoint sequence of the initial object into the three-dimensional skeleton keypoint sequence of the target object by using an action transfer neural network.
In this implementation, determining the three-dimensional skeleton keypoint sequence of the target object by using a trained action transfer neural network can improve the efficiency and the accuracy of keypoint retargeting.
In a possible implementation, the above action transfer method further includes training the action transfer neural network by: obtaining a sample motion video involving an action sequence of a sample object; identifying, by using the action transfer neural network to be trained, a first sample two-dimensional skeleton keypoint sequence of the sample object from multiple frames of sample image in the sample motion video; obtaining a second sample two-dimensional skeleton keypoint sequence by performing limb scaling for the first sample two-dimensional skeleton keypoint sequence; determining a loss function based on the first sample two-dimensional skeleton keypoint sequence and the second sample two-dimensional skeleton keypoint sequence; and adjusting at least one network parameter of the action transfer neural network based on the loss function.
In this implementation, the action transfer neural network is trained by using the loss function constructed by using the first sample two-dimensional skeleton keypoint sequence of the sample object and the second sample two-dimensional skeleton keypoint sequence obtained by performing limb scaling for the sample object. In this case, when the initial object and the target object differ greatly in structure, the accuracy of the action transfer can be improved. Furthermore, when the above action transfer neural network is trained, paired action-role data in the real world is not used. Thus, unsupervised construction of the loss function and unsupervised training of the action transfer neural network are achieved, which helps to improve the accuracy of the trained action transfer neural network in performing action transfer.
In a possible implementation, determining the loss function based on the first sample two-dimensional skeleton keypoint sequence and the second sample two-dimensional skeleton keypoint sequence includes: determining a first sample action transfer component sequence based on the first sample two-dimensional skeleton keypoint sequence; determining a second sample action transfer component sequence based on the second sample two-dimensional skeleton keypoint sequence; determining an estimated three-dimensional skeleton keypoint sequence based on the first sample action transfer component sequence; and determining the loss function based on the first sample action transfer component sequence, the second sample action transfer component sequence and the estimated three-dimensional skeleton keypoint sequence.
In this implementation, the loss function is constructed by using the first sample action transfer component sequence orthogonally decomposed based on the first sample two-dimensional skeleton keypoint sequence, the second sample action transfer component sequence orthogonally decomposed based on the second sample two-dimensional skeleton keypoint sequence, and the estimated three-dimensional skeleton keypoint sequence reconstructed based on the first sample action transfer component sequence. In this way, when the initial object and the target object greatly differ in structure, the accuracy of the action transfer can be improved.
In a possible implementation, the loss function includes a motion invariant loss function, the first sample action transfer component sequence includes first sample motion component information, first sample structure component information and first sample angle component information corresponding to each frame of sample image, the second sample action transfer component sequence includes second sample motion component information, second sample structure component information and second sample angle component information corresponding to each frame of sample image.
Determining the loss function includes: based on the second sample motion component information, the first sample structure component information, and the first sample angle component information corresponding to each frame of sample image, determining first estimated skeleton keypoints corresponding to respective first sample two-dimensional skeleton keypoints in the first sample two-dimensional skeleton keypoint sequence; based on the first sample motion component information, the second sample structure component information, and the second sample angle component information corresponding to each frame of sample image, determining second estimated skeleton keypoints corresponding to respective second sample two-dimensional skeleton keypoints in the second sample two-dimensional skeleton keypoint sequence; and based on each of the first estimated skeleton keypoints, each of the second estimated skeleton keypoints, the first sample motion component information included in the first sample action transfer component sequence, the second sample motion component information included in the second sample action transfer component sequence, and the estimated three-dimensional skeleton keypoint sequence, determining the motion invariant loss function.
In this implementation, based on the information obtained by orthogonally decomposing the first sample two-dimensional skeleton keypoint sequence and the second sample two-dimensional skeleton keypoint sequence, the first estimated skeleton keypoint is obtained by performing skeleton recovery for the sample object, and the second estimated skeleton keypoint is obtained by performing skeleton recovery for the sample object subjected to limb scaling. Next, the motion invariant loss function is constructed in combination with the recovered first estimated skeleton keypoint, the recovered second estimated skeleton keypoint and the reconstructed estimated three-dimensional skeleton keypoint sequence of the sample object. Although the sample object is subjected to change or perturbation in structure and view angle, the motion information after transfer should remain unchanged. Therefore, by constructing the motion invariant loss function and minimizing the motion invariant loss function during training, the accuracy of the constructed action transfer neural network in performing action transfer can be improved.
In a possible implementation, the loss function further includes a structure invariant loss function, and determining the loss function further includes: screening out first sample two-dimensional skeleton keypoints in sample images corresponding to a first moment and a second moment from the first sample two-dimensional skeleton keypoint sequence; screening out second sample two-dimensional skeleton keypoints in the sample images corresponding to the second moment and the first moment from the second sample two-dimensional skeleton keypoint sequence; and based on the first sample two-dimensional skeleton keypoints in the sample images corresponding to the first moment and the second moment, the second sample two-dimensional skeleton keypoints in the sample images corresponding to the second moment and the first moment, and the estimated three-dimensional skeleton keypoint sequence, determining the structure invariant loss function.
In this implementation, the structure invariant loss function can be constructed by using the first sample two-dimensional skeleton keypoints and the second sample two-dimensional skeleton keypoints at different moments in combination with the reconstructed estimated three-dimensional skeleton keypoint sequence of the sample object. Because the structure of the sample object is unchanged over time, by constructing the structure invariant loss function and minimizing the motion invariant loss function and the structure invariant loss function during training, the accuracy of the constructed action transfer neural network in performing action transfer can be improved.
In a possible implementation, the loss function further includes a view-angle invariant loss function, and determining the loss function further includes: based on the first sample two-dimensional skeleton keypoints in the sample images corresponding to the first moment and the second moment, the first sample angle component information of the sample images corresponding to the first moment and the second moment, the second sample angle component information of the sample images corresponding to the first moment and the second moment and the estimated three-dimensional skeleton keypoint sequence, determining the view-angle invariant loss function.
In this implementation, the view-angle invariant loss function can be constructed by using the first sample two-dimensional skeleton keypoints at different moments, the reconstructed estimated three-dimensional skeleton keypoint sequence of the sample object, and the like. Because the photographing view angle of the sample object does not change along with changes in the motion and structure of the sample object, by constructing the view-angle invariant loss function and minimizing the view-angle invariant loss function, the motion invariant loss function and the structure invariant loss function during training, the accuracy of the constructed action transfer neural network in performing action transfer can be improved.
In a possible implementation, the loss function further includes a reconstruction recovery loss function, and determining the loss function further includes: determining the reconstruction recovery loss function based on the first sample two-dimensional skeleton keypoint sequence and the estimated three-dimensional skeleton keypoint sequence.
In this implementation, the reconstruction recovery loss function can be constructed by using the first sample two-dimensional skeleton keypoint sequence and the reconstructed estimated three-dimensional skeleton keypoint sequence of the sample object. Because the sample object should remain unchanged when sample object recovery is performed, by constructing the reconstruction recovery loss function and minimizing the reconstruction recovery loss function, the view-angle invariant loss function, the motion invariant loss function and the structure invariant loss function during training, the accuracy of the constructed action transfer neural network in performing action transfer can be improved.
According to a second aspect of embodiments of the present disclosure, an action transfer apparatus is provided. The action transfer apparatus includes: at least one processor, and one or more memories coupled to the at least one processor and storing programming instructions for execution by the at least one processor to perform operations including: obtaining a first initial video involving an action sequence of an initial object; identifying a two-dimensional skeleton keypoint sequence of the initial object from plural frames of image in the first initial video; converting the two-dimensional skeleton keypoint sequence of the initial object into a three-dimensional skeleton keypoint sequence of a target object; and generating a target video involving an action sequence of the target object based on the three-dimensional skeleton keypoint sequence of the target object.
In a possible implementation, converting the two-dimensional skeleton keypoint sequence of the initial object into the three-dimensional skeleton keypoint sequence of the target object includes: determining an action transfer component sequence of the initial object based on the two-dimensional skeleton keypoint sequence of the initial object; and determining the three-dimensional skeleton keypoint sequence of the target object based on the action transfer component sequence of the initial object.
In a possible implementation, before determining the three-dimensional skeleton keypoint sequence of the target object, the operations further include: obtaining a second initial video involving the target object; and identifying a two-dimensional skeleton keypoint sequence of the target object from multiple frames of image in the second initial video. Determining the three-dimensional skeleton keypoint sequence of the target object based on the action transfer component sequence of the initial object includes: determining an action transfer component sequence of the target object based on the two-dimensional skeleton keypoint sequence of the target object; determining a target action transfer component sequence based on the action transfer component sequence of the initial object and the action transfer component sequence of the target object; and determining the three-dimensional skeleton keypoint sequence of the target object based on the target action transfer component sequence.
In a possible implementation, the action transfer component sequence of the initial object includes a motion component sequence, an object structure component sequence, and a photographing angle component sequence, and determining the action transfer component sequence of the initial object based on the two-dimensional skeleton keypoint sequence includes: for each of the multiple frames of image in the first initial video, determining motion component information, object structure component information, and photographing angle component information of the initial object based on a two-dimensional skeleton keypoint corresponding to the frame of image; determining the motion component sequence based on the motion component information corresponding to each of the multiple frames of image in the first initial video; determining the object structure component sequence based on the object structure component information corresponding to each of the multiple frames of image in the first initial video; and determining the photographing angle component sequence based on the photographing angle component information corresponding to each of the multiple frames of image in the first initial video.
According to a third aspect of embodiments of the present disclosure, a non-transitory computer readable storage medium is provided. The non-transitory computer readable storage medium is coupled to at least one processor and has machine-executable instructions stored thereon that, when executed by the at least one processor, cause the at least one processor to perform operations including: obtaining a first initial video involving an action sequence of an initial object; identifying a two-dimensional skeleton keypoint sequence of the initial object from plural frames of image in the first initial video; converting the two-dimensional skeleton keypoint sequence of the initial object into a three-dimensional skeleton keypoint sequence of a target object; and generating a target video involving an action sequence of the target object based on the three-dimensional skeleton keypoint sequence of the target object.
The above apparatuses and non-transitory computer readable storage medium of the present disclosure contain technical features essentially identical or similar to the technical features of any aspect, or any embodiment of any aspect, of the above methods of the present disclosure. Therefore, for descriptions of the effects of the above apparatuses and non-transitory computer readable storage medium, reference may be made to the effect descriptions of the above methods, which will not be repeated herein.
In order to make the technical solutions of the embodiments of the present disclosure clearer, the accompanying drawings involved in the descriptions of the embodiments will be briefly introduced below. It should be understood that the following drawings only show some embodiments of the present disclosure and thus shall not be taken as limiting the scope of protection of the present disclosure. Those skilled in the art may obtain other relevant drawings based on these drawings without creative effort.
In order to make the objects, technical solutions and advantages of the embodiments of the present disclosure clearer, the technical solutions of the embodiments of the present disclosure will be clearly and fully described in conjunction with the accompanying drawings of the embodiments of the present disclosure. It should be understood that the accompanying drawings in the present disclosure are merely used for the purpose of description and illustration rather than limiting the scope of protection of the present disclosure. In addition, it should be understood that the illustrative drawings are not drawn to scale. The flowcharts of the present disclosure illustrate operations of some embodiments of the present disclosure. It should be understood that the operations of the flowcharts may not be carried out in sequence, and steps without a logical contextual relationship may be carried out in a reverse sequence or simultaneously. Furthermore, those skilled in the art may add one or more other operations to the flowcharts or remove one or more operations from the flowcharts under the guidance of the contents of the present disclosure.
In addition, the described embodiments are merely some of the embodiments of the present disclosure rather than all embodiments. Generally, the components of the embodiments of the present disclosure described in the accompanying drawings may be arranged and designed in different configurations. Furthermore, the following detailed descriptions of the embodiments of the present disclosure in the accompanying drawings are not intended to limit the claimed scope of the present disclosure but only represent some selected embodiments of the present disclosure. All other embodiments obtained by those skilled in the art based on the embodiments of the present disclosure without creative effort shall fall within the scope of protection of the present disclosure.
It is noted that the term “include” used in the embodiments of the present disclosure indicates the presence of the features appearing after the term but does not preclude other additional features.
The present disclosure provides an action transfer method and apparatus, in which, by extracting a two-dimensional skeleton keypoint sequence, retargeting the two-dimensional skeleton keypoint sequence to a three-dimensional skeleton keypoint sequence, and performing action rendering for a target object based on the three-dimensional skeleton keypoint sequence, action transfer is achieved without performing action transfer directly at the pixel level. In this way, the problem that an initial video and a target video greatly differ in structure and view angle (especially when the initial object makes an extreme action, or the initial object and the target object greatly differ in structure) is mitigated, and the accuracy of the action transfer can be improved. Moreover, in the present disclosure, by retargeting the three-dimensional skeleton keypoint sequence from the two-dimensional skeleton keypoint sequence, the use of three-dimensional keypoint estimation and retargeting, which carries a large error in action transfer, is avoided, thereby helping to improve the accuracy of the action transfer.
A method, an apparatus, a device and a storage medium for action transfer in the present disclosure will be described below in combination with specific embodiments. The action transfer is between objects, e.g., an initial object and a target object. Each of the objects can be an animated object or a real object such as a person. In some examples, the initial object and the target object can be a same object, e.g., a same person. In some examples, the initial object and the target object are different objects, e.g., an animated object and a real person, or different persons.
An embodiment of the present disclosure provides an action transfer method. The method is applied to a terminal device or server or the like which performs action transfer. Specifically, as shown in
At step S110, a first initial video involving an action sequence of an initial object is obtained.
Herein, the first initial video includes multiple frames of image. The initial object may show a different posture in each frame of image, and all of these postures are combined to form the action sequence of the initial object.
At step S120, a two-dimensional skeleton keypoint sequence of the initial object is identified from multiple frames of image in the first initial video.
In order to determine the action sequence of the initial object, a two-dimensional skeleton keypoint of the initial object may be extracted from each frame of image in the first initial video. The two-dimensional skeleton keypoints corresponding to multiple frames of image respectively form the above two-dimensional skeleton keypoint sequence. Illustratively, the two-dimensional skeleton keypoint may include a keypoint corresponding to each joint of the initial object. The keypoints corresponding to various joints are combined together to form a skeleton of the initial object.
In a possible implementation, the two-dimensional skeleton keypoint of the initial object in each frame of image may be extracted by using a two-dimensional posture estimation neural network.
The above initial object may be a real human, a virtual human or animal or the like, which is not limited herein.
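As a minimal sketch of step S120, the keypoint sequence may be assembled frame by frame as below; pose_net is a hypothetical two-dimensional posture estimation network returning a (num_joints, 2) array of joint coordinates per frame:

```python
import numpy as np

def extract_keypoint_sequence(frames, pose_net):
    """Assemble the 2D skeleton keypoint sequence of the initial object.

    pose_net is a hypothetical 2D posture estimation network; each call
    returns a (num_joints, 2) array of joint coordinates for one frame.
    """
    sequence = [pose_net(frame) for frame in frames]
    return np.stack(sequence)  # shape: (num_frames, num_joints, 2)
```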
At step S130, the two-dimensional skeleton keypoint sequence is converted into a three-dimensional skeleton keypoint sequence of a target object.
In a possible implementation, an action transfer component sequence of the initial object may be firstly determined based on the two-dimensional skeleton keypoint sequence; and then, the three-dimensional skeleton keypoint sequence of the target object is determined based on the action transfer component sequence of the initial object.
Illustratively, the action transfer component sequence of the initial object includes at least one of a motion component sequence, an object structure component sequence or a photographing angle component sequence.
The motion component sequence represents a motion of the initial object, the object structure component sequence represents body shape of the initial object, and the photographing angle component sequence represents an angle of a camera.
In some embodiments, the motion component sequence, the object structure component sequence, and the photographing angle component sequence as described above may be formed in the following sub-steps.
At sub-step 1, motion component information, object structure component information, and photographing angle component information corresponding to each frame of image are respectively determined based on the two-dimensional skeleton keypoint corresponding to each of the multiple frames of image in the first initial video.
At sub-step 2, the motion component sequence is determined based on the motion component information corresponding to each of the multiple frames of image in the first initial video.
At sub-step 3, the object structure component sequence is determined based on the object structure component information corresponding to each of the multiple frames of image in the first initial video.
At sub-step 4, the photographing angle component sequence is determined based on the photographing angle component information corresponding to each of the multiple frames of image in the first initial video.
In the above sub-steps, the motion component information, the object structure component information, and the photographing angle component information corresponding to each frame of image are obtained by encoding the two-dimensional skeleton keypoint corresponding to the frame of image into three semantically orthogonal vectors through a neural network. Afterwards, the motion component information corresponding to the multiple frames of image is combined to form the motion component sequence, the object structure component information corresponding to the multiple frames of image is combined to form the object structure component sequence, and the photographing angle component information corresponding to the multiple frames of image is combined to form the photographing angle component sequence.
Among the above three pieces of component information, each piece is invariant with respect to the other two.
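A minimal sketch of this per-frame encoding is given below, assuming PyTorch, J keypoints flattened into a 2*J vector per frame, and simple MLP encoders with illustrative widths; the disclosure does not fix a specific encoder architecture:

```python
import torch
import torch.nn as nn

J = 15                       # number of skeleton joints (illustrative assumption)
D_M, D_S, D_V = 64, 32, 8    # encoding lengths of the three components (assumptions)

# Three encoders, one per semantically orthogonal component.
motion_encoder    = nn.Sequential(nn.Linear(2 * J, 128), nn.ReLU(), nn.Linear(128, D_M))
structure_encoder = nn.Sequential(nn.Linear(2 * J, 128), nn.ReLU(), nn.Linear(128, D_S))
angle_encoder     = nn.Sequential(nn.Linear(2 * J, 128), nn.ReLU(), nn.Linear(128, D_V))

def decompose(keypoints_2d):
    """keypoints_2d: (num_frames, 2*J) flattened 2D skeleton keypoints."""
    motion_seq    = motion_encoder(keypoints_2d)     # motion component sequence
    structure_seq = structure_encoder(keypoints_2d)  # object structure component sequence
    angle_seq     = angle_encoder(keypoints_2d)      # photographing angle component sequence
    return motion_seq, structure_seq, angle_seq
```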
In these steps, the three-dimensional skeleton keypoint sequence is retargeted by using the action transfer component sequence orthogonally decomposed based on the two-dimensional skeleton keypoint sequence. In this way, the use of three-dimensional keypoint estimation and retargeting with a large error in an action transfer is avoided, which helps to improve the accuracy of the action transfer. Further, when the initial object makes an extreme action or the initial object and the target object differ greatly in structure, the problem of low accuracy of the action transfer can be further overcome.
At step S140, a target video involving an action sequence of the target object is generated based on the three-dimensional skeleton keypoint sequence.
After the three-dimensional skeleton keypoint sequence is determined, a two-dimensional target skeleton keypoint of the target object may be obtained by projecting a three-dimensional skeleton keypoint corresponding to each frame of image in the three-dimensional skeleton keypoint sequence back to a two-dimensional space, and the two-dimensional target skeleton keypoints corresponding to multiple frames of image form a two-dimensional target skeleton keypoint sequence. Afterwards, the target video involving the action sequence of the target object is generated based on the two-dimensional target skeleton keypoint sequence. The action sequence of the target object corresponds to the action sequence of the initial object.
In some embodiments, for generating the target video which includes the action sequence of the target object based on the two-dimensional target skeleton keypoint sequence, action rendering may be performed with each group of obtained two-dimensional target skeleton keypoints to obtain a posture of the target object corresponding to each frame of image, and the action sequence of the target object can be obtained by combining the postures in various frames of image sequentially.
Illustratively, the target video involving the action sequence of the target object may be generated by using a video rendering engine, based on the two-dimensional target skeleton keypoint corresponding to each frame of image.
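The re-projection can, for example, be an orthographic projection that rotates the reconstructed keypoints about the vertical axis by the desired view angle and then drops the depth coordinate; this concrete projection is an illustrative assumption rather than the disclosure's prescribed operation:

```python
import numpy as np

def reproject(keypoints_3d, angle=0.0):
    """Project 3D skeleton keypoints back to 2D (orthographic sketch).

    keypoints_3d: (num_frames, num_joints, 3); angle: rotation about the
    vertical (y) axis in radians, applied before dropping the depth axis.
    """
    c, s = np.cos(angle), np.sin(angle)
    rotation = np.array([[c, 0.0, s],
                         [0.0, 1.0, 0.0],
                         [-s, 0.0, c]])
    rotated = keypoints_3d @ rotation.T
    return rotated[..., :2]  # the 2D target skeleton keypoints
```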
As above, the two-dimensional target skeleton keypoint sequence is obtained by performing re-projection on the reconstructed three-dimensional skeleton keypoint sequence. In this way, the use of three-dimensional keypoint estimation and retargeting with a large error in an action transfer is avoided, which helps to improve the accuracy of the action transfer.
Illustratively, in the above step S130, orthogonal decomposition may be performed for the two-dimensional skeleton keypoint sequence by using a trained action transfer neural network, and the three-dimensional skeleton keypoint sequence of the target object is determined by using the action transfer component sequence obtained by the orthogonal decomposition.
The above action transfer neural network may include three encoders and one decoder. Each encoder is used to extract one type of component information from each two-dimensional skeleton keypoint in the two-dimensional skeleton keypoint sequence, so that the above motion component information, object structure component information and photographing angle component information are obtained. After the above component information is obtained, decoding is performed by using the decoder, and an estimated three-dimensional skeleton keypoint of the target object is obtained through reconstruction, which serves as one three-dimensional skeleton keypoint in the above three-dimensional skeleton keypoint sequence; finally, the estimated three-dimensional skeleton keypoint may be re-projected back to a two-dimensional space.
It should be noted that, for determining the three-dimensional skeleton keypoint, the object structure component information and photographing angle component information obtained by the encoders via direct decomposition, or the object structure component information and the photographing angle component information obtained by average pooling, can be used. Specifically, the two-dimensional skeleton keypoints respectively corresponding to multiple continuous frames of image including the current frame of image are orthogonally decomposed to obtain the object structure component information and the photographing angle component information corresponding to each frame of image. Then, an average pooling operation is performed for the object structure component information corresponding to each frame of image to obtain the final object structure component information corresponding to the current frame of image, and an average pooling operation is performed for the photographing angle component information corresponding to each frame of image to obtain the final photographing angle component information corresponding to the current frame of image. Finally, based on the motion component information obtained by direct decomposition, the object structure component information obtained by the average pooling operation and the photographing angle component information obtained by the average pooling operation, the three-dimensional skeleton keypoint corresponding to the current frame of image is determined.
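A minimal sketch of this temporal average pooling over continuous frames, reusing the PyTorch setting of the earlier sketch, is given below; the window size is an illustrative assumption:

```python
import torch
import torch.nn.functional as F

def temporal_average(component_seq, window=8):
    """Average-pool per-frame component information over a temporal window.

    component_seq: (num_frames, dim) structure or angle component information;
    returns the final per-frame component information of the same shape.
    """
    num_frames = component_seq.shape[0]
    pooled = F.avg_pool1d(
        component_seq.t().unsqueeze(0),      # (1, dim, num_frames)
        kernel_size=window, stride=1,
        padding=window // 2, count_include_pad=False)
    return pooled.squeeze(0).t()[:num_frames]  # (num_frames, dim)
```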
In this embodiment, direct action transfer at the level of pixel is avoided, which reduces the problem that the first initial video and the target video greatly differ in structure and view angle, and improves the accuracy of the action transfer especially when the initial object makes an extreme action or the initial object and the target object differ greatly in structure. Furthermore, in the above embodiment, the extracted two-dimensional skeleton keypoints are orthogonally decomposed to the motion component information, the object structure component information and the photographing angle component information, so as to further mitigate the low accuracy of the action transfer when the initial object makes an extreme action or the initial object and the target object differ greatly in structure.
In the embodiments of the present disclosure, in order to further mitigate the low accuracy of the action transfer when the initial object makes an extreme action or the initial object and the target object differ greatly in structure, before the three-dimensional skeleton keypoint sequence of the target object is determined, a second initial video involving the target object is obtained and a two-dimensional skeleton keypoint sequence of the target object is identified from multiple frames of image in the second initial video.
Afterwards, when the three-dimensional skeleton keypoint sequence of the target object is determined, an action transfer component sequence of the target object is firstly determined based on the two-dimensional skeleton keyponit sequence of the target object; then, a target action transfer component sequence is determined based on the action transfer component sequence of the initial object and the action transfer component sequence of the target object; finally, the three-dimensional skeleton keypoint sequence of the target object is determined based on the target action transfer component sequence.
The action transfer component sequence of the target object is determined in the same manner as the action transfer component sequence of the initial object: firstly, a two-dimensional skeleton keypoint of the target object is extracted from each frame of image in the second initial video, and the two-dimensional skeleton keypoint in each frame of image is orthogonally decomposed to determine the motion component information, the object structure component information and the photographing angle component information of the target object. Finally, the motion component information corresponding to the multiple frames of image forms the motion component sequence, the object structure component information corresponding to the multiple frames of image forms the object structure component sequence, and the photographing angle component information corresponding to the multiple frames of image forms the photographing angle component sequence.
In the above embodiment, the three-dimensional skeleton keypoint sequence of the target object is reconstructed by using the fused target action transfer component sequence, and then the two-dimensional target skeleton keypoint sequence of the target object is obtained by performing re-projection for the reconstructed three-dimensional skeleton keypoint sequence. In this way, the use of three-dimensional keypoint estimation and retargeting with a large error in an action transfer is avoided, thereby helping to improve the accuracy of the action transfer.
The action transfer method of the present disclosure is described below with a specific embodiment.
As shown in
At step 1, skeleton extraction operation: a two-dimensional skeleton keypoint sequence of an initial object is obtained by extracting a two-dimensional skeleton keypoint of the initial object from each frame of image in a first initial video; a two-dimensional skeleton keypoint sequence of a target object is obtained by extracting a two-dimensional skeleton keypoint of the target object from each frame of image in a second initial video.
At step 2, action transfer operation: each two-dimensional skeleton keypoint in the two-dimensional skeleton keypoint sequence of the initial object and each two-dimensional skeleton keypoint in the two-dimensional skeleton keypoint sequence of the target object are respectively subjected to an encoding process, that is, orthogonal decomposition, such that motion component information, object structure component information and photographing angle component information corresponding to each two-dimensional skeleton keypoint or each frame of image for the initial object, as well as motion component information, object structure component information and photographing angle component information corresponding to each two-dimensional skeleton keypoint or each frame of image for the target object, may be obtained.
The motion component information corresponding to multiple frames of image for the initial object forms the motion component sequence of the initial object, the object structure component information corresponding to multiple frames of image for the initial object forms the object structure component sequence of the initial object, and the photographing angle component information corresponding to multiple frames of image for the initial object forms the photographing angle component sequence of the initial object. The motion component sequence, the object structure component sequence and the photographing angle component sequence of the initial object form the action transfer component sequence of the initial object.
Likewise, the motion component information corresponding to multiple frames of image for the target object forms the motion component sequence of the target object, the object structure component information corresponding to multiple frames of image for the target object forms the object structure component sequence of the target object, and the photographing angle component information corresponding to multiple frames of image for the target object forms the photographing angle component sequence of the target object. The motion component sequence, the object structure component sequence and the photographing angle component sequence of the target object form the action transfer component sequence of the target object.
Afterwards, a target action transfer component sequence is determined based on the action transfer component sequence of the initial object and the action transfer component sequence of the target object; a three-dimensional skeleton keypoint sequence of the target object is determined based on the target action transfer component sequence.
Illustratively, recombined target motion component information, target structure component information and target angle component information may be obtained by recombining the motion component information, the object structure component information and the photographing angle component information corresponding to each frame of image for the initial object with the motion component information, the object structure component information and the photographing angle component information corresponding to each frame of image for the target object.
The target motion component information corresponding to multiple frames of image may form a target motion component sequence, the target structure component information corresponding to multiple frames of image may form a target object structure component sequence, and the target angle component information corresponding to multiple frames of image may form a target photographing angle component sequence. The target motion component sequence, the target object structure component sequence and the target photographing angle component sequence form the above target action transfer component sequence.
Next, three-dimensional skeleton keypoints corresponding to one frame of image for the target object at three preset angles may be obtained by performing a decoding operation for the target motion component information, the target structure component information and the target angle component information. The three-dimensional skeleton keypoints of the multiple frames of image form the above three-dimensional skeleton keypoint sequence.
Finally, a two-dimensional target skeleton keypoint of the target object at each preset angle may be obtained by re-projecting the three-dimensional skeleton keypoint at each preset angle back to a two-dimensional space.
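Reusing the encoder sketch above, the recombination and decoding may be sketched as follows; the decoder width, the fusion by concatenation, and decoding once per preset angle are illustrative assumptions:

```python
import torch
import torch.nn as nn

# Decoder mapping the fused components to J three-dimensional keypoints per
# frame (widths are illustrative assumptions; J, D_M, D_S, D_V and decompose()
# come from the earlier encoder sketch).
decoder = nn.Sequential(nn.Linear(D_M + D_S + D_V, 128), nn.ReLU(), nn.Linear(128, 3 * J))

def retarget(initial_kp2d, target_kp2d):
    """initial_kp2d, target_kp2d: (num_frames, 2*J) flattened 2D keypoints."""
    motion_i, _, _ = decompose(initial_kp2d)          # motion from the initial object
    _, structure_t, angle_t = decompose(target_kp2d)  # structure and angle from the target
    fused = torch.cat([motion_i, structure_t, angle_t], dim=-1)
    kp3d = decoder(fused).reshape(-1, J, 3)           # 3D skeleton keypoint sequence
    # Each per-frame 3D keypoint set can then be re-projected at each preset angle.
    return kp3d
```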
At step 3, skeleton-video rendering operation: based on the two-dimensional target skeleton keypoint of the target object at each preset angle in each frame of image, a target action of the target object at each preset angle is determined and a target video of the target object at each preset angle is generated based on the target action.
In the above embodiment, the accuracy of the action transfer can be significantly improved, and the action transfer can be achieved at any angle. Moreover, when the target object and the initial object greatly differ in structure, or the initial object makes an extreme action, an accurate action transfer can still be achieved, thus obtaining a good visual effect.
At present, because a motion presents complex nonlinearity and it is difficult to find paired action-role data in the real world, it is difficult to build an accurate action transfer model to achieve the above action transfer, resulting in low accuracy of the action transfer. In order to mitigate the above defect, the present disclosure further provides a method of training an action transfer neural network. The method may be applied to the above terminal device or server which performs action transfer and may also be applied to a terminal device or server which trains a neural network separately. Specifically, as shown in
At step S310, a sample motion video involving an action sequence of a sample object is obtained.
At step S320, a first sample two-dimensional skeleton keypoint sequence of the sample object is identified from multiple frames of sample image in the sample motion video.
Herein, a first sample two-dimensional skeleton keypoint of the sample object is extracted from each frame of image in the sample motion video. The first sample two-dimensional skeleton keypoints extracted from multiple frames of sample image form the first sample two-dimensional skeleton keypoint sequence.
The above first sample two-dimensional skeleton keypoint may include a keypoint corresponding to each joint of the sample object. The keypoints corresponding to various joints are combined to form a skeleton of the sample object.
In a specific implementation, the first sample two-dimensional skeleton keypoint of the sample object may be extracted by using a two-dimensional posture estimation neural network.
The above sample object may be a real human, or a virtual human or animal or the like, which is not limited herein.
At step S330, a second sample two-dimensional skeleton keypoint sequence is obtained by performing limb scaling for the first sample two-dimensional skeleton keypoint sequence.
Herein, the second sample two-dimensional skeleton keypoint sequence is obtained by performing scaling for each first sample two-dimensional skeleton keypoint in the first sample two-dimensional skeleton keypoint sequence based on a preset scaling factor.
As shown in
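Limb scaling may, for instance, be realized by re-positioning each joint along the bone from its parent joint, with bone lengths multiplied by the preset scaling factor; the toy parent table and the factor below are illustrative assumptions:

```python
import numpy as np

PARENT = {1: 0, 2: 1, 3: 2}  # child joint -> parent joint (toy skeleton, assumption)

def limb_scale(keypoints_2d, factor=1.3):
    """Scale the limbs of one frame of 2D skeleton keypoints.

    keypoints_2d: (num_joints, 2). Parents are assumed to be processed before
    their children so that scaling propagates down each kinematic chain.
    """
    scaled = keypoints_2d.astype(float).copy()
    for child, parent in PARENT.items():
        bone = keypoints_2d[child] - keypoints_2d[parent]  # original bone vector
        scaled[child] = scaled[parent] + factor * bone     # lengthened bone
    return scaled
```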
At step S340, a loss function is determined based on the first sample two-dimensional skeleton keypoint sequence and the second sample two-dimensional skeleton keypoint sequence. Based on the loss function, a network parameter of the action transfer neural network is adjusted.
In a specific implementation, each first sample two-dimensional skeleton keypoint in the first sample two-dimensional skeleton keypoint sequence and each second sample two-dimensional skeleton keypoint in the second sample two-dimensional skeleton keypoint sequence are orthogonally decomposed. Thus, estimation of a three-dimensional skeleton keypoint sequence and recovery of a two-dimensional sample skeleton keypoint are performed based on the information obtained by the decomposition, and the loss function is then constructed based on the information obtained by the decomposition, the estimated three-dimensional skeleton keypoint sequence and the recovered two-dimensional sample skeleton keypoint.
Herein, minimizing the value of the constructed loss function is the goal of training the action transfer neural network.
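A minimal training-loop sketch under the earlier illustrative assumptions is given below; loader, limb_scale_batch and compute_loss are hypothetical stand-ins for the sample data source, the limb scaling of step S330 and the loss construction of step S340, and the optimizer choice and learning rate are likewise assumptions:

```python
import torch

# Parameters of the three encoders and the decoder from the earlier sketches.
params = (list(motion_encoder.parameters()) + list(structure_encoder.parameters())
          + list(angle_encoder.parameters()) + list(decoder.parameters()))
optimizer = torch.optim.Adam(params, lr=1e-4)

for kp_first in loader:                       # first sample 2D keypoint sequences
    kp_second = limb_scale_batch(kp_first)    # limb-scaled second sample sequences
    loss = compute_loss(kp_first, kp_second)  # unsupervised loss of step S340
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()                          # adjust the network parameters
```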
In this embodiment, the action transfer neural network is trained by using the loss function constructed by using the first sample two-dimensional skeleton keypoint sequence of the sample object and the second sample two-dimensional skeleton keypoint sequence obtained by performing limb scaling for the sample object. In this case, when the initial object and the target object differ greatly in structure, the accuracy of the action transfer can be improved. Furthermore, when the above action transfer neural network is trained, paired action-role data in the real world is not used. Thus, unsupervised construction of the loss function and unsupervised training of the action transfer neural network are achieved, which helps to improve the accuracy of the trained action transfer neural network in performing action transfer.
The above action transfer neural network may include three encoders and one decoder. The training of the action transfer neural network is essentially a training of the three encoders and one decoder.
In some embodiments, the loss function may be determined based on the first sample two-dimensional skeleton keypoint sequence and the second sample two-dimensional skeleton keypoint sequence in the following steps.
At step 1, a first sample action transfer component sequence is determined based on the first sample two-dimensional skeleton keypoint sequence.
First sample motion component information, first sample structure component information and first sample angle component information corresponding to each frame of sample image may be obtained by orthogonally decomposing each first sample two-dimensional skeleton keypoint in the first sample two-dimensional skeleton keypoint sequence. The first sample motion component information corresponding to multiple frames of sample image forms a first sample motion component sequence; the first sample structure component information corresponding to multiple frames of sample image forms a first sample structure component sequence; the first sample angle component information corresponding to multiple frames of sample image forms a first sample angle component sequence. The first sample motion component sequence, the first sample structure component sequence and the first sample angle component sequence form the above first sample action transfer component sequence.
Herein, as shown in
Final first sample structure component information and final first sample angle component information may be obtained by performing an average pooling operation over the first sample structure component information and the first sample angle component information corresponding to multiple continuous frames of sample image.
At step 2, a second sample action transfer component sequence is determined based on the second sample two-dimensional skeleton keypoint sequence.
Second sample motion component information, second sample structure component information and second sample angle component information corresponding to each frame of sample image may be obtained by orthogonally decomposing each second sample two-dimensional skeleton keypoint in the second sample two-dimensional skeleton keypoint sequence. The second sample motion component information corresponding to multiple frames of sample image forms a second sample motion component sequence; the second sample structure component information corresponding to multiple frames of sample image forms a second sample structure component sequence; the second sample angle component information corresponding to multiple frames of sample image forms a second sample angle component sequence. The second sample motion component sequence, the second sample structure component sequence and the second sample angle component sequence form the above second sample action transfer component sequence.
Herein, as shown in
Final second sample structure component information and final second sample angle component information may be obtained by performing an average pooling operation over the second sample structure component information and the second sample angle component information corresponding to multiple continuous frames of sample image.
At step 3, an estimated three-dimensional skeleton keypoint sequence is determined based on the first sample action transfer component sequence.
Herein, specifically, an estimated three-dimensional skeleton keypoint is determined based on the first sample motion component information, the first sample structure component information and the first sample angle component information corresponding to one frame of sample image. The estimated three-dimensional skeleton keypoints corresponding to multiple frames of sample image form the above estimated three-dimensional skeleton keypoint sequence.
Herein, specifically, a reconstructed estimated three-dimensional skeleton keypoint may be obtained by performing decoding for the first sample motion component information, the first sample structure component information and the first sample angle component information of one frame of sample image by using one decoder G.
At step 4, the loss function is determined based on the first sample action transfer component sequence, the second sample action transfer component sequence and the estimated three-dimensional skeleton keypoint sequence.
In a specific implementation, recovery of a two-dimensional sample skeleton keypoint is performed by using the first sample motion component information, the first sample structure component information and the first sample angle component information in the first sample action transfer component sequence, and the second sample motion component information, the second sample structure component information and the second sample angle component information in the second sample action transfer component sequence; the loss function is then constructed by using the estimated three-dimensional skeleton keypoint sequence and the recovered two-dimensional sample skeleton keypoint.
In this implementation, the loss function is constructed by using the first sample action transfer component sequence orthogonally decomposed based on the first sample two-dimensional skeleton keypoint sequence, the second sample action transfer component sequence orthogonally decomposed based on the second sample two-dimensional skeleton keypoint sequence, and the estimated three-dimensional skeleton keypoint sequence reconstructed based on the first sample action transfer component sequence. In this way, when the initial object and the target object greatly differ in structure, the accuracy of the action transfer can be improved.
Although the sample object is subjected to change or perturbation in structure and view angle, the motion information after transfer should remain unchanged. Therefore, a motion invariant loss function can be constructed and minimized during training to improve the accuracy of the constructed action transfer neural network in performing action transfer. Specifically, the above motion invariant loss function can be constructed in the following steps.
At step 1, a first estimated skeleton keypoint corresponding to a first sample two-dimensional skeleton keypoint in the first sample two-dimensional skeleton keypoint sequence is determined based on the second sample motion component information, the first sample structure component information and the first sample angle component information.
At step 2, a second estimated skeleton keypoint, corresponding to a second sample two-dimensional skeleton keypoint in the second sample two-dimensional skeleton keypoint sequence, is determined based on the first sample motion component information, the second sample structure component information and the second sample angle component information.
At steps 1 and 2, the first estimated skeleton keypoint x̂′ and the second estimated skeleton keypoint x̂″ are generated by the following formulas:

x̂′ = φ[G(Em(x′), Ēs(x), Ēv(x)), 0]

x̂″ = φ[G(Em(x), Ēs(x′), Ēv(x′)), 0]   (1)
In the above formulas, x represents a first sample two-dimensional skeleton keypoint, x′ represents a second sample two-dimensional skeleton keypoint, Em represents the motion encoder, Ēs represents performing an average pooling operation on the sample structure component information extracted by the encoder, and Ēv represents performing an average pooling operation on the sample angle component information extracted by the encoder.
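Read this way, formula (1) swaps components between the original keypoints x and the limb-scaled keypoints x′. A hedged sketch follows, where `avg_pool_time` stands for Ē, `project` stands for the operation φ[·, 0] (assumed here to be an orthographic projection of the reconstructed three-dimensional keypoints at rotation angle 0), and `enc` and `G` are the encoder and decoder sketched above:

```python
import torch

def avg_pool_time(code_seq: torch.Tensor) -> torch.Tensor:
    """Ē: average pooling over the frame axis, broadcast back per frame."""
    return code_seq.mean(dim=-1, keepdim=True).expand_as(code_seq)

def project(kp3d: torch.Tensor, angle: float = 0.0) -> torch.Tensor:
    """Assumed meaning of φ[·, angle]: rotate about the vertical axis,
    then drop the depth coordinate (orthographic projection to 2D)."""
    c = torch.cos(torch.tensor(angle))
    s = torch.sin(torch.tensor(angle))
    x = c * kp3d[..., 0] + s * kp3d[..., 2]
    return torch.stack([x, kp3d[..., 1]], dim=-1)

def cross_reconstruct(enc, G, x, x_scaled):
    """Formula (1): the first estimated keypoint takes motion from the
    limb-scaled sample x' and structure/angle from the original x;
    the second estimated keypoint swaps the roles."""
    m, s_code, v = enc(x)           # first sample components
    m2, s2, v2 = enc(x_scaled)      # second (limb-scaled) components
    x_hat_1 = project(G(m2, avg_pool_time(s_code), avg_pool_time(v)))
    x_hat_2 = project(G(m, avg_pool_time(s2), avg_pool_time(v2)))
    return x_hat_1, x_hat_2
```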
At step 3, the motion invariant loss function is determined based on the first estimated skeleton keypoint, the second estimated skeleton keypoint, the first sample motion component information, the second sample motion component information and the estimated three-dimensional skeleton keypoint sequence.
The motion invariant loss functions constructed can include the following three functions:
In the above formulas, N represents the number of frames of the sample motion video, T represents the number of joints corresponding to a first sample two-dimensional skeleton keypoint, M represents a preset value, Cm represents the encoding length of the first sample motion component information, K represents the number of rotations of the sample object, X̂ represents an estimated three-dimensional skeleton keypoint, and £crs, £inv,m(s) and £inv,m(v) represent the three motion invariant loss functions.
In embodiments of the present disclosure, based on the information obtained by orthogonally decomposing the first sample two-dimensional skeleton keypoint sequence and the second sample two-dimensional skeleton keypoint sequence, the first estimated skeleton keypoint is obtained by performing skeleton recovery for the sample object and the second estimated skeleton keypoint is obtained by performing skeleton recovery for the sample object subjected to limb scaling. Next, the motion invariant loss function is constructed in combination with the recovered first estimated skeleton keypoint, the recovered second estimated skeleton keypoint and the reconstructed estimated three-dimensional skeleton keypoint sequence of the sample object.
Because the structure of the sample object is unchanged over time, a structure invariant loss function is constructed, and the motion invariant loss function and the structure invariant loss function are minimized during training to improve the accuracy of the constructed action transfer neural network in performing action transfer. Specifically, the structure invariant loss function can be constructed in the following steps.
At step 1, the first sample two-dimensional skeleton keypoint of the sample object at the first moment and the first sample two-dimensional skeleton keypoint of the sample object at the second moment are screened out from the first sample two-dimensional skeleton keypoint sequence.
The second sample two-dimensional skeleton keypoint of the sample object at the second moment and the second sample two-dimensional skeleton keypoint of the sample object at the first moment are screened out from the second sample two-dimensional skeleton keypoint sequence.
The above first sample two-dimensional skeleton keypoints are two-dimensional skeleton keypoints of the sample object which are respectively extracted from the sample images corresponding to the first moment t1 and the second moment t2 in the sample motion video, where the two-dimensional skeleton keypoints are not subjected to limb scaling. The above second sample two-dimensional skeleton keypoints are keypoints obtained by performing limb scaling for the skeleton keypoints of the sample object respectively extracted from the sample images corresponding to the first moment t1 and the second moment t2 in the sample motion video.
At step 2, the structure invariant loss function is determined based on the first sample two-dimensional skeleton keypoint of the sample object at the first moment, the first sample two-dimensional skeleton keypoint of the sample object at the second moment, the second sample two-dimensional skeleton keypoint of the sample object at the second moment, the second sample two-dimensional skeleton keypoint of the sample object at the first moment and the estimated three-dimensional skeleton keypoint sequence.
In a specific implementation, the structure invariant loss functions constructed include the following two functions:
In the formulas, St1 represents sample structure component information directly extracted from the first sample two-dimensional skeleton keypoint at the moment t1, St2 represents sample structure component information directly extracted from the first sample two-dimensional skeleton keypoint at the moment t2, St2′ represents sample structure component information directly extracted from the second sample two-dimensional skeleton keypoint at the moment t2, St1′ represents sample structure component information directly extracted from the second sample two-dimensional skeleton keypoint at the moment t1, Cb represents the encoding length corresponding to the first sample structure component information, m is a preset value, s(·) represents a cosine similarity function, and £trip,s and £inv,s represent the two structure invariant loss functions.
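The exact formulas are not reproduced above. Consistent with the symbols just defined (cosine similarity s(·,·), preset margin m, structure codes St1, St2, St2′), a triplet-style reading of £trip,s and a drift term for £inv,s might be sketched as follows, with the caveat that the precise loss shapes are assumptions:

```python
import torch
import torch.nn.functional as F

def structure_triplet_loss(s_t1, s_t2, s_t2_scaled, margin: float = 0.2):
    """Assumed shape of £trip,s: structure codes of the same object at
    moments t1 and t2 should be more similar (cosine) than the codes of
    the original and the limb-scaled object."""
    pos = F.cosine_similarity(s_t1, s_t2, dim=-1)         # same structure
    neg = F.cosine_similarity(s_t1, s_t2_scaled, dim=-1)  # scaled structure
    return F.relu(neg - pos + margin).mean()

def structure_invariance_loss(s_t1, s_t2):
    """Assumed shape of £inv,s: the structure code should not drift
    between two moments of the same video."""
    return (s_t1 - s_t2).abs().mean()
```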
In an embodiment of the present disclosure, the structure invariant loss function can be constructed by using the first sample two-dimensional skeleton keypoints and the second sample two-dimensional skeleton keypoints at different moments in combination with the reconstructed estimated three-dimensional skeleton keypoint sequence of the sample object.
Because the photographing view angle of the sample object is unchanged as the motion and the structure of the sample object change, a view-angle invariant loss function is constructed, and the view-angle invariant loss function, the motion invariant loss function and the structure invariant loss function are minimized during training, so that the accuracy of the constructed action transfer neural network in performing action transfer can be improved. Specifically, the view-angle invariant loss function may be constructed in the following steps.
The view-angle invariant loss function is determined based on the first sample two-dimensional skeleton keypoint of the sample object at the first moment, the first sample two-dimensional skeleton keypoint of the sample object at the second moment, the first sample angle component information, the second sample angle component information, and the estimated three-dimensional skeleton keypoint sequence.
The view-angle invariant loss functions include the following two functions:
In the formulas, vt1 represents the sample angle component information directly extracted from the first sample two-dimensional skeleton keypoint at the moment t1, vt2 represents the sample angle component information directly extracted from the first sample two-dimensional skeleton keypoint at the moment t2, Cv represents the encoding length corresponding to the first sample angle component information, and £trip,v and £inv,v represent the two view-angle invariant loss functions.
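Under the same caveat, the two view-angle terms could mirror the structure sketch above, using the angle codes vt1 and vt2; which code serves as the triplet negative is an assumption here:

```python
import torch.nn.functional as F

def view_angle_losses(v_t1, v_t2, v_neg, margin: float = 0.2):
    """Assumed shapes of £trip,v and £inv,v: the angle code of one video
    stays stable across moments t1 and t2, and stays closer to itself
    over time than to the angle code v_neg of a perturbed copy."""
    pos = F.cosine_similarity(v_t1, v_t2, dim=-1)
    neg = F.cosine_similarity(v_t1, v_neg, dim=-1)
    trip_v = F.relu(neg - pos + margin).mean()
    inv_v = (v_t1 - v_t2).abs().mean()
    return trip_v, inv_v
```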
Because the sample object should remain unchanged after recovery of the sample object, a reconstruction recovery loss function is constructed, and the reconstruction recovery loss function, the view-angle invariant loss function, the motion invariant loss function and the structure invariant loss function are minimized during training, so that the accuracy of the constructed action transfer neural network in performing action transfer can be improved. Specifically, the reconstruction recovery loss function may be constructed in the following steps.
The reconstruction recovery loss function is determined based on the first sample two-dimensional skeleton keypoint sequence and the estimated three-dimensional skeleton keypoint sequence.
The reconstruction recovery loss functions constructed include the following two functions:
In the formulas, D represents a convolutional network over a time sequence (a temporal discriminator), Ex∼px represents taking the expectation, over samples x drawn from the data distribution px, of the function ½log D(x) + ½log(1 − D(x̂(k))), and £rec and £adv represent the two reconstruction recovery loss functions.
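The stated expectation matches the standard discriminator objective of an adversarial setup. A sketch under that reading, where D is assumed to be a temporal convolutional discriminator returning a probability and x̂(k) a reconstruction re-projected at the k-th rotation, might be:

```python
import torch

def reconstruction_loss(x, x_hat):
    """Assumed shape of £rec: the recovered two-dimensional keypoints
    should match the first sample two-dimensional skeleton keypoints."""
    return (x - x_hat).abs().mean()

def adversarial_objective(D, x, x_hat_k):
    """The stated expectation (1/2)log D(x) + (1/2)log(1 - D(x̂(k))).
    In the usual adversarial setup the discriminator is trained to
    increase this quantity and the reconstruction branch to work
    against it; the sign convention here is an assumption."""
    eps = 1e-8  # numerical stability inside the logarithms
    real = torch.log(D(x) + eps).mean()
    fake = torch.log(1.0 - D(x_hat_k) + eps).mean()
    return 0.5 * real + 0.5 * fake
```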
In the above embodiments, the reconstruction recovery loss function, the view-angle invariant loss function, the motion invariant loss function and the structure invariant loss function are constructed. In a specific implementation, a target loss function may be obtained by fusing the above loss functions in the following formula.
£ = λrec£rec + λcrs£crs + λadv£adv + λtrip(£trip,s + £trip,v) + λinv(£inv,m(s) + £inv,m(v) + £inv,s + £inv,v)   (14)
In the formula, λrec, λcrs, λadv, λtrip and λinv all represent preset weights.
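As a small sketch, the fusion in formula (14) is a plain weighted sum of the individual terms; the dictionary keys below mirror the symbols and are illustrative:

```python
def target_loss(losses, weights):
    """Formula (14): fuse the loss terms with the preset weights
    λrec, λcrs, λadv, λtrip and λinv."""
    return (weights["rec"] * losses["rec"]
            + weights["crs"] * losses["crs"]
            + weights["adv"] * losses["adv"]
            + weights["trip"] * (losses["trip_s"] + losses["trip_v"])
            + weights["inv"] * (losses["inv_m_s"] + losses["inv_m_v"]
                                + losses["inv_s"] + losses["inv_v"]))
```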
The action transfer neural network is trained by minimizing the target loss function.
Corresponding to the above action transfer method, the present disclosure further provides an action transfer apparatus. The apparatus is applied to a terminal device or server performing action transfer, and various modules therein can implement steps identical to the steps of the above method and achieve the same beneficial effects. Therefore, for the same parts therein, no redundant descriptions will be made herein.
As shown in the accompanying figure, the action transfer apparatus includes:
a video obtaining module 510, configured to obtain a first initial video involving an action sequence of an initial object;
a keypoint extracting module 520, configured to identify a two-dimensional skeleton keypoint sequence of the initial object from multiple frames of image in the first initial video;
a keypoint converting module 530, configured to convert the two-dimensional skeleton keypoint sequence into a three-dimensional skeleton keypoint sequence of a target object;
an image rendering module 540, configured to generate a target video involving an action sequence of the target object based on the three-dimensional skeleton keypoint sequence.
In some embodiments, when converting the two-dimensional skeleton keypoint sequence into the three-dimensional skeleton keypoint sequence of the target object, the keypoint converting module 530 is configured to: determine an action transfer component sequence of the initial object based on the two-dimensional skeleton keypoint sequence; determine the three-dimensional skeleton keypoint sequence of the target object based on the action transfer component sequence of the initial object.
In some embodiments, the video obtaining module 510 is further configured to obtain a second initial video involving the target object; the keypoint extracting module 520 is further configured to identify a two-dimensional skeleton keypoint sequence of the target object from multiple frames of image in the second initial video; when determining the three-dimensional skeleton keypoint sequence of the target object based on the action transfer component sequence of the initial object, the keypoint converting module 530 is configured to: determine an action transfer component sequence of the target object based on the two-dimensional skeleton keypoint sequence of the target object; determine a target action transfer component sequence based on the action transfer component sequence of the initial object and the action transfer component sequence of the target object; determine the three-dimensional skeleton keypoint sequence of the target object based on the target action transfer component sequence.
In some embodiments, the action transfer component sequence of the initial object includes a motion component sequence, an object structure component sequence, and a photographing angle component sequence; when determining the action transfer component sequence of the initial object based on the two-dimensional skeleton keypoint sequence, the keypoint converting module 530 is configured to: respectively determine motion component information, object structure component information, and photographing angle component information corresponding to each frame of image based on a two-dimensional skeleton keypoint corresponding to each of the multiple frames of image in the first initial video; determine the motion component sequence based on the motion component information corresponding to each of the multiple frames of image in the first initial video; determine the object structure component sequence based on the object structure component information corresponding to each of the multiple frames of image in the first initial video; determine the photographing angle component sequence based on the photographing angle component information corresponding to each of the multiple frames of image in the first initial video.
An embodiment of the present disclosure provides an electronic device. As shown in the accompanying figure, the electronic device includes a processor 601 and a memory storing machine readable instructions executable by the processor 601.
The machine readable instructions are executed by the processor 601 to implement the steps of the action transfer method: obtaining a first initial video involving an action sequence of an initial object; identifying a two-dimensional skeleton keypoint sequence of the initial object from multiple frames of image in the first initial video; converting the two-dimensional skeleton keypoint sequence into a three-dimensional skeleton keypoint sequence of a target object; generating a target video involving an action sequence of the target object based on the three-dimensional skeleton keypoint sequence.
In addition to the above, the machine readable instructions, when executed by the processor 601, may further implement the content of any one of the embodiments described in the above method part, and thus no redundant descriptions are made herein.
An embodiment of the present disclosure further provides a computer program product corresponding to the above method and apparatus. The computer program product includes a computer readable storage medium storing program codes. The instructions included in the program codes may be used to implement the method in the above method embodiments. For the specific implementation, reference may be made to the method embodiments, and no redundant descriptions are made herein.
The above descriptions of the various embodiments emphasize the differences between the embodiments, and for the same or similar parts, reference may be made to one another. Therefore, for brevity, no redundant descriptions are made herein.
Those skilled in the art may clearly understand that, for convenience and simplicity of description, for the specific working processes of the system and apparatus described above, reference may be made to the corresponding processes in the above method embodiments, and details are not repeated herein. In the embodiments of the present disclosure, it should be understood that the system, apparatus and method disclosed herein may be implemented in other ways. The apparatus embodiments described above are merely illustrative; for example, the division of the modules is only a logical functional division, and the modules may be divided in another way in an actual implementation. For another example, a plurality of modules or components may be combined or integrated into another system, or some features may be neglected or not executed. Furthermore, the mutual couplings, direct couplings or communication connections shown or discussed herein may be implemented through some communication interfaces, and the indirect couplings or communication connections between apparatuses or modules may be in electrical, mechanical or other forms.
The modules described as separate members may or may not be physically separated, and the members displayed as modules may or may not be physical units; they may be located in one place, or may be distributed to a plurality of network units. Part or all of the modules may be selected according to actual requirements to achieve the objectives of the solutions in the embodiments.
Furthermore, various functional units in various embodiments of the present disclosure may be integrated into one processing unit, or may be present physically separately, or two or more units thereof may be integrated into one unit.
The functions, if implemented in the form of software functional units and sold or used as independent products, may be stored in a processor-executable non-volatile computer-readable storage medium. Based on such understanding, the technical solutions of the present disclosure essentially, or the part contributing to the prior art, or a part of the technical solutions, may be embodied in the form of a software product. The software product is stored in a storage medium and includes multiple instructions for enabling a computer device (such as a personal computer, a server or a network device) to execute all or part of the steps of the methods disclosed in the embodiments of the present disclosure. The above storage media include various media capable of storing program codes, such as a USB flash disk, a mobile hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk or an optical disk.
The above descriptions are merely specific embodiments of the present disclosure, but the scope of protection of the present disclosure is not limited thereto. Any changes or substitutions that would readily occur to those skilled in the art within the technical scope of the present disclosure shall fall within the scope of protection of the present disclosure. Therefore, the scope of protection of the present disclosure is indicated by the appended claims.
Foreign Application Priority Data: Application No. 202010243906.1, filed in March 2020, China (national application).
The present application is a continuation of International Application No. PCT/CN2021/082407, filed on Mar. 23, 2021, which claims priority to Chinese Patent Application No. 202010243906.1, filed on Mar. 31, 2020, all of which are incorporated herein by reference in their entirety.
Related U.S. Application Data: Parent application PCT/CN2021/082407, filed in March 2021; child application Ser. No. 17/555,965.