1. Field of the Invention
The present invention relates generally to a three-dimensional (3D) character and a production method thereof, and more particularly to an innovative animatable 3D character with a skin surface and an internal skeleton.
2. Description of Related Art Including Information Disclosed Under 37 CFR 1.97 and 37 CFR 1.98
With the advancement of computer graphics and information technology, animation and simulation have become increasingly important in industry, and the demand for digital human models is rising.
A digital human model is usually composed of static attributes (e.g., anthropometric information and appearance) and dynamic attributes (e.g., biomechanical and physiological models). However, related research and technologies often focus on only one of these two categories, and a digital human model possessing both static and dynamic attributes is rarely seen.
In the development of the static attributes of the digital human model, anthropometric information, such as body height and other body dimensions, has been used to represent the attributes. In this way, evaluations can be made using very simple geometry; however, such a model bears little resemblance to a real human. To achieve greater realism, the 3D scanner has been widely used for modeling. Some related studies built models by establishing triangular meshes directly from the relationships between data points, while others used key landmarks as control points to generate smooth surfaces. Nevertheless, regardless of which method is used, the resulting model is static and not animatable.
In the development of the dynamic attributes of the digital human model, related studies have established various mathematical models to simulate human motion. However, the applications were limited to numerical results without intuitive presentations. To overcome this problem, other studies used a skeletal framework to represent the human body, which can visualize the process of simulation and the results of evaluations. However, such a model lacks a skin surface and is therefore somewhat different from a real human.
The Taiwan Patent (No. 94132645) entitled “Automated landmark extraction from three-dimensional whole body scanned data” is an invention by the present inventors, having a corresponding patent application in the U.S. Patent and Trademark Office published as U.S. Patent Publication No. 20060171590. That invention is used to define key landmarks from 3D scanned data, but its output data carry no relationships among them. Hence, the present invention can be considered an extension of that invention, utilizing those data outputs to generate an animatable 3D character.
British Patent No. GB 2389 500 A, entitled “Generating 3D body models from scanned data”, also uses scanned data to establish a skin surface for 3D body models, but the resulting models are static and not animatable. Furthermore, U.S. Pat. No. 6,384,819, entitled “System and method for generating an animatable character”, establishes a customized animatable model with a skeletal framework, but such models are limited to two-dimensional movements.
Thus, to overcome the aforementioned problems of the prior art, it would be an advancement in the art to provide an improved structure that can significantly improve efficacy.
To this end, the inventors have provided the present invention of practicability after deliberate design and evaluation based on years of experience in the production, development and design of related products.
The present invention mainly uses a 3D scanner to generate the skin surface of a 3D character, with relatively high similarity to a real human. In addition, by controlling the end points of the internal skeleton, the skin surface can be driven for animation. Thus, the static and dynamic attributes of the 3D character can be integrated, so that it can be better applied in related domains such as computer animations and ergonomic evaluations. The appearance can be represented by the smooth skin surface generated by the 3D scanner. The internal skeleton can also be obtained from 3D scanned data. In this way, the locations of body joints and end points of body segments on the internal skeleton can be close to their actual positions, so that the accuracy of motions can be enhanced.
Although the invention has been explained in relation to its preferred embodiment, it is to be understood that many other possible modifications and variations can be made without departing from the spirit and scope of the invention as hereinafter claimed.
The features and the advantages of the present invention will be more readily understood upon a thoughtful deliberation of the following detailed description of a preferred embodiment of the present invention with reference to the accompanying drawings.
A skin surface 10 has a preset 3D appearance. The skin surface 10 is not limited to a human appearance. It can also have an animal or a cartoon appearance.
An internal skeleton 20 matches the appearance of the skin surface. The internal skeleton 20 is combined with the skin surface 10.
There is an animation mechanism, so that the skin surface 10 and the internal skeleton 20 can generate interrelated motions.
The present invention uses 3D scanned data to generate an animatable 3D character, which is systematically composed of the skin surface 10 and the internal skeleton 20.
1. Using Scanned Point Data to Generate the Skin Surface
In this stage, the skin surface is mainly generated in a sequence from points to lines and then from lines to a surface. As shown in
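The points-to-lines-to-surface sequence can be illustrated with a short sketch. This is a hypothetical, simplified illustration only and not the patented procedure itself: it assumes the scanned points have already been grouped into horizontal slices, resamples each slice into a closed contour line, and then lofts adjacent contours into triangles to form the surface. The function names and the resampling count `n` are assumptions introduced purely for illustration.

```python
import math

def resample_slice(points, n):
    """Sort a horizontal slice of (x, y) scan points by angle about the
    slice centroid and pick n evenly spaced points, forming a closed
    cross-sectional contour line (the "points to lines" step)."""
    cx = sum(p[0] for p in points) / len(points)
    cy = sum(p[1] for p in points) / len(points)
    ordered = sorted(points, key=lambda p: math.atan2(p[1] - cy, p[0] - cx))
    step = len(ordered) / n
    return [ordered[int(i * step)] for i in range(n)]

def loft_surface(slices, heights, n=16):
    """Connect corresponding points on adjacent contour lines into
    triangles (the "lines to surface" step), returning vertices and
    triangle index triples."""
    contours = [resample_slice(s, n) for s in slices]
    vertices, triangles = [], []
    for z, contour in zip(heights, contours):
        vertices.extend((x, y, z) for x, y in contour)
    for k in range(len(contours) - 1):
        a, b = k * n, (k + 1) * n
        for i in range(n):
            j = (i + 1) % n
            triangles.append((a + i, a + j, b + i))
            triangles.append((a + j, b + j, b + i))
    return vertices, triangles
```

Each pair of adjacent slices contributes a closed band of 2n triangles, so the surface grows slice by slice along the body height.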
2. Establishing the Internal Skeleton
Landmark extraction methods, such as silhouette analysis, minimum circumference determination, gray-scale detection, and human-body contour plots, as disclosed by the present inventors in US Patent Publication No. 20060171590, can be used to identify major body joints 21 and the end points of body segments 22 (see
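Of the landmark extraction methods listed above, minimum circumference determination can be illustrated with a brief sketch. This is an assumed simplification for illustration only: it takes a limb's horizontal contours and returns the height of the slice with the smallest perimeter, which is a plausible estimate of a joint location such as the wrist; the actual procedure of US Patent Publication No. 20060171590 may differ.

```python
import math

def perimeter(contour):
    """Perimeter of a closed contour given as ordered (x, y) points."""
    return sum(math.dist(contour[i], contour[(i + 1) % len(contour)])
               for i in range(len(contour)))

def minimum_circumference_joint(contours, heights):
    """Return the height of the slice whose contour perimeter is
    smallest, used as an estimated joint position along the limb."""
    best = min(range(len(contours)), key=lambda i: perimeter(contours[i]))
    return heights[best]
```

Locating the narrowest cross-section in this way places the joint close to its actual anatomical position, supporting the accuracy of the internal skeleton.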
3. Combining the Skin Surface 10 and the Internal Skeleton 20 to Generate the Animation Mechanism
After generating the skin surface 10 and the internal skeleton 20 of the 3D character, the last step is to combine them. When the internal skeleton 20 is manipulated, the skin surface 10 can be driven to generate motions: the control points of the skin surface move along with the corresponding joints of the internal skeleton. Depending on their relative positions and relationships, different joints of the internal skeleton influence the skin surface to different degrees. Hence, an “influence weight” can be defined for each joint's effect on the skin surface. The motions can then be simulated with both the skin surface and the internal skeleton.
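The influence-weight idea can be sketched in the manner of linear blend skinning, in which each skin control point moves as a weighted combination of the motions of the joints that influence it. The following is an assumed two-dimensional illustration only; the function names, the use of planar rotations, and the example weights are all hypothetical simplifications, not the invention's own formulation.

```python
import math

def rotate(point, pivot, angle):
    """Rotate a 2D point about a pivot (a joint) by angle radians."""
    dx, dy = point[0] - pivot[0], point[1] - pivot[1]
    c, s = math.cos(angle), math.sin(angle)
    return (pivot[0] + c * dx - s * dy, pivot[1] + s * dx + c * dy)

def skin_point(point, joints, angles, weights):
    """Blend the positions produced by each joint's motion according to
    the point's influence weights (assumed to sum to one), so that
    joints nearer the point dominate its movement."""
    x = y = 0.0
    for joint, angle, w in zip(joints, angles, weights):
        px, py = rotate(point, joint, angle)
        x += w * px
        y += w * py
    return (x, y)
```

A control point midway between two joints might carry weights such as (0.5, 0.5), so bending either joint moves it only halfway, which yields smooth deformation of the skin surface near joints.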
As shown in
In the end, the method disclosed by the present invention can be integrated into computer animation software to simulate various motions with the 3D character generated from 3D scanned data. When the generated motions were compared frame by frame with real ones, they were found to be very similar. In addition, comparisons of the body-joint positions and body-segment lengths between the generated and real characters showed only slight, acceptable differences. Therefore, by both subjective and objective methods, the present invention is shown to be practical and reliable.
The present invention can be applied in many fields.
1. Hardware and Software Providers of 3D Scanners
By using 3D scanners, the present invention can extend their applications: it can not only present an external appearance but also generate an animatable character through control of the internal skeleton. The enhanced functions can thus attract more users.
2. Product Design
By using the animatable character generated by the present invention, not only can the fitness of products be tested, but further evaluations can also be realized through simulations. For example, when combined with virtual garments, both the flexibility of the garments and the results of moving in them can be tested.
3. Work Station Design
For the manufacturing industry, when there is a need to create a new work station, the evaluations can be done in a virtual environment, which may involve the allocation of objects, man-machine interactions, and the arrangement of work flow. Hence, cost and manpower can be greatly reduced.
4. Entertainment Industry
The production of movies, TV programs and electronic games depends more and more on the support of computer animations. By using the present invention to generate an animatable character, players can feel closer to the virtual world.