This disclosure relates to computer animation and computer generated imagery. More specifically, this disclosure relates to techniques for sharing shape information between computer models.
With the wide-spread availability of computers, animators and computer graphics artists can rely upon computers to assist in the animation and computer generated imagery process. This may include using computers to represent physical models as virtual models in computer memory. This may also include using computers to facilitate animation, for example, by the designing, posing, deforming, coloring, painting, or the like, of characters or other elements of a computer animation display.
Pioneering companies in the computer-aided animation/computer generated imagery (CGI) industry can include Pixar. Pixar is more widely known as Pixar Animation Studios, the creators of animated features such as “Toy Story” (1995) and “Toy Story 2” (1999), “A Bug's Life” (1998), “Monsters, Inc.” (2001), “Finding Nemo” (2003), “The Incredibles” (2004), “Cars” (2006), “Ratatouille” (2007), and others. In addition to creating animated features, Pixar develops computing platforms specially designed for computer animation and CGI, now known as RenderMan®. RenderMan® is now widely used in the film industry and the inventors have been recognized for their contributions to RenderMan® with multiple Academy Awards®.
One core functional aspect of RenderMan® software can include the use of a “rendering engine” to convert geometric and/or mathematical descriptions of objects or other models into images. This process is known in the industry as “rendering.” For movies or other features, a user (e.g., an animator or other skilled artist) specifies the geometric description of a model or other objects, such as characters, props, background, or the like that may be rendered into images. An animator may also specify poses and motions for objects or portions of the objects. In some instances, the geometric description of objects may include a number of animation variables (avars), and values for the avars.
The production of animated features and CGI may involve the extensive use of computer graphics techniques to produce a visually appealing image from the geometric description of an object or model that can be used to convey an element of a story. One of the challenges in creating models for use in animated features can be balancing the desire for a visually appealing image of a character or other object with the practical issues involved in allocating the computational resources required to produce those visually appealing images. Often the geometric descriptions of objects or models at various stages in a feature film production environment may be rough and coarse, lacking the realism and detail that would be expected of the final production.
One issue with the production process is the time and effort involved when an animator undertakes to create the geometric description of a model and the model's associated avars, rigging, shader variables, paint data, or the like. Even with models that lack the detail and realism expected of the final production, it may take several hours to several days for an animator to design, rig, pose, paint, or otherwise prepare the model for a given stage of the production process. Further, although the model need not be fully realistic at all stages of the production process, it can be desirable that the animator or artist producing the model be able to modify certain attributes of the model at any stage. However, modifying the model during the production process may also involve significant time and effort. Often, there may not be sufficient time to make desired modifications while maintaining a release schedule.
Accordingly, what is desired is to solve problems relating to transferring information between meshes, some of which may be discussed herein. Additionally, what is desired is to reduce drawbacks related to transferring information between meshes, some of which may be discussed herein.
In various embodiments, data and other information of models can be shared and combined to create new models or to update features of existing models. A correspondence between pairs of meshes in a collection of meshes can be created. The correspondences may enable an animator or artist to share, blend, or combine information from a plurality of meshes. Mesh information and other data at, near, or otherwise associated with the models can be “pushed through” the correspondences and combined or blended with information from other models.
The correspondence between each pair of models can enable animators and other digital artists to create new characters from existing characters that may have different topologies and geometries. Additionally, the correspondence may be created between different versions of the same character, thereby allowing the animator to implement changes to characters at later stages of the production process and transfer information from prior versions, preserving previous work product and reducing the time and cost of updating the characters.
In some embodiments, a correspondence for sharing or transferring information between models can be generated based on a pair of “feature curve networks.” A correspondence can be generated using one or more geometric primitives (e.g., points, lines, curves, volumes, etc.) associated with a source surface, such as a portion of a source mesh, and corresponding geometric primitives associated with a destination surface. For example, a collection of “feature curves” may be created to partition the source and destination surfaces into a collection of “feature regions” at “features” or other prominent aspects of a model. The resulting collections of partitions or “feature curve networks” can be used to construct a full surface correspondence between all points of the source mesh and all points of the destination mesh.
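By way of illustration only, the following sketch reduces the feature-region idea to a single pair of corresponding triangular regions: a source point is expressed in the local parameterization of its source region and re-evaluated at the same parameters in the destination region. The class and function names are hypothetical and are not drawn from any actual implementation.

```python
# A minimal sketch, assuming each feature region can be approximated by a
# single triangle with a linear parameterization. Hypothetical names only.
import numpy as np

class FeatureRegion:
    """A feature region approximated by one triangle."""
    def __init__(self, a, b, c):
        self.a, self.b, self.c = (np.asarray(p, dtype=float) for p in (a, b, c))

    def to_parameters(self, p):
        # Solve p = a + u*(b - a) + v*(c - a) for (u, v) in the least-squares
        # sense; exact when p lies in the triangle's plane.
        m = np.column_stack((self.b - self.a, self.c - self.a))
        u, v = np.linalg.lstsq(m, np.asarray(p, dtype=float) - self.a, rcond=None)[0]
        return u, v

    def from_parameters(self, u, v):
        return self.a + u * (self.b - self.a) + v * (self.c - self.a)

def map_point(src, dst, p):
    """Push a point through the correspondence: same (u, v), other region."""
    u, v = src.to_parameters(p)
    return dst.from_parameters(u, v)

# Corresponding regions bounded by feature curves on two different surfaces.
src = FeatureRegion((0, 0, 0), (1, 0, 0), (0, 1, 0))
dst = FeatureRegion((0, 0, 0), (2, 0, 0), (0, 3, 1))
print(map_point(src, dst, (0.25, 0.25, 0.0)))  # -> [0.5  0.75  0.25]
```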
The information sharing between two or more meshes may be unidirectional or bidirectional based on the correspondences. Information may thereby be shared between two or more meshes, such as scalar fields, variables, controls, avars, articulation data, character rigging, shader data, lighting data, paint data, simulation data, topology and/or geometry, re-meshing information, map information, or the like.
In various embodiments, difference information between a plurality of meshes may be determined based on the correspondence. The difference information may be stored. For example, the difference information may be generated and stored as a bump map. Alternatively, the difference between a set of meshes may be determined and information indicative of the difference may be generated and stored as a set of wavelet coefficients.
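As a simplified illustration of both alternatives, the sketch below assumes a correspondence has already paired the vertices of two meshes one-to-one; the difference is reduced to a per-vertex scalar displacement in the spirit of a bump map, and one level of a Haar wavelet transform produces storable coefficients. All names and data are invented for illustration.

```python
# A toy sketch, assuming a one-to-one vertex pairing and per-vertex normals.
import numpy as np

def scalar_difference(src_pts, dst_pts, normals):
    """Signed displacement of each paired point along the source normal."""
    return np.einsum("ij,ij->i", dst_pts - src_pts, normals)

def haar_1d(signal):
    """One level of the Haar wavelet transform: averages and details."""
    pairs = signal.reshape(-1, 2)          # assumes an even-length signal
    return pairs.mean(axis=1), (pairs[:, 0] - pairs[:, 1]) / 2.0

src = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [1, 1, 0]], dtype=float)
dst = src + np.array([[0, 0, 0.1], [0, 0, 0.3], [0, 0, 0.2], [0, 0, 0.4]])
normals = np.tile([0.0, 0.0, 1.0], (4, 1))

bump = scalar_difference(src, dst, normals)   # [0.1 0.3 0.2 0.4]
averages, details = haar_1d(bump)             # coefficients to be stored
```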
A further understanding of the nature, advantages, and improvements offered by those inventions disclosed herein may be realized by reference to the remaining portions of this disclosure and any accompanying drawings.
In order to better describe and illustrate embodiments and/or examples of any inventions presented within this disclosure, reference may be made to one or more accompanying drawings. The additional details or examples used to describe the accompanying drawings should not be considered as limitations to the scope of any of the disclosed inventions, any of the presently described embodiments and/or examples, or the presently understood best mode of any invention presented within this disclosure.
Techniques and tools can be implemented that assist in the production of computer animation and computer graphics imagery. A mesh can be the structure that gives shape to a model. The mesh of a model may include, in addition to information specifying vertices and edges, various additional pieces of information. In various embodiments, point weight groups, shader variables, articulation controls, hair variables and styles, paint data, or the like, can be shared between meshes having different topologies and geometries. Information associated with a plurality of meshes can be blended for sharing with or transferring to the mesh of another character, even from characters with completely different topologies.
Design computer 110 can be any PC, laptop, workstation, mainframe, cluster, or the like. Object library 120 can be any database configured to store information related to objects that may be designed, posed, animated, simulated, rendered, or the like.
Object modeler 130 can be any hardware and/or software configured to model objects. Object modeler 130 may generate 2-D and 3-D object data to be stored in object library 120. Object simulator 140 can be any hardware and/or software configured to simulate objects. Object simulator 140 may generate simulation data using physically-based numerical techniques. Object renderer 150 can be any hardware and/or software configured to render objects. For example, object renderer 150 may generate still images, animations, motion picture sequences, or the like of objects stored in object library 120.
Motion of a model associated with mesh 200 may be realized by controlling mesh 200, for example by controlling vertices 230, 240, and 250. Polygons and vertices of mesh 200 may be individually animated by moving their location in space (x, y, z) for each displayed frame of a computer animation. Polygons and vertices of mesh 200 may also move together as a group, maintaining constant relative position. Thus, for example, by raising vertices of mesh 200 by appropriate amounts at the corners of lips on the head of the human character, a smiling expression can be formed. Similarly, vertices of mesh 200 located at or near features or other prominent aspects of the model created by mesh 200, such as eyebrows, cheeks, forehead, etc. may be moved to deform the head of the human character to form a variety of expressions.
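A minimal sketch of such a vertex-level deformation appears below; the vertex indices and offset are invented solely for illustration.

```python
# A minimal sketch of deforming a mesh by displacing selected vertices,
# in the spirit of raising lip-corner vertices to form a smile.
import numpy as np

def deform(points, indices, offset):
    """Return a copy of `points` with `offset` added to the given vertices."""
    out = np.array(points, dtype=float)
    out[indices] += offset
    return out

head_vertices = np.zeros((6, 3))            # placeholder vertex positions
lip_corners = [2, 3]                        # hypothetical corner-of-lip ids
smiling = deform(head_vertices, lip_corners, np.array([0.0, 0.05, 0.0]))
```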
In addition to controlling character deformations, information can be “attached to” mesh 200 to provide other functional and/or decorative purposes. For example, mesh 200 may be connected to skeletons, character rigging, or other animation controls and avars used to animate, manipulate, or deform the model via mesh 200. Further, fields of data and/or variables specifying color, shading, paint, texture, etc. can be located at certain vertices or defined over surfaces of mesh 200. As discussed above, constructing mesh 200 and placing all of this information on mesh 200 can be a time consuming process. This process may limit how many characters or other objects may be created, the topologies and geometries of those models, and what changes can be made during various stages in the production of animations, such as feature-length films.
In various embodiments, new models can be created and existing models can be more readily updated using techniques of this disclosure that allow animators to overcome some of the timing constraints involved in creating models. Additionally, the time and effort put into designing one model can be preserved, allowing the prior work and effort performed by the animator to be shared with or copied to another model. In some embodiments, a correspondence can be created that allows information present at or on a mesh to be shared with another mesh. The correspondence can reduce the time required to create new models or to update existing models at later stages of the production process. Thus, animation controls, rigging, shader and paint data, etc. can be authored once on a character, and shared or transferred to a different version of the same character or to another character of completely different topology and geometry.
In the example of
Referring to
In various embodiments, one or more correspondences may be created that allow information associated with mesh 310 to be readily shared with or transferred to mesh 360. Scalar field 320, animation controls 330, topology/geometry data 340, and/or paint data 350 can be “pushed” through a correspondence between mesh 310 and mesh 360. For example, scalar field 320 can be transferred to mesh 360 to create scalar field 370. Thus, once correspondences are created between meshes, any information at or on one mesh may be shared with another mesh. This can allow sharing of information even if one mesh includes differing topologies and geometries from other meshes.
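As a simplified illustration, the sketch below pushes a per-vertex scalar field through a precomputed correspondence in which each destination vertex is assumed to map to a source triangle with barycentric weights; the data layout is hypothetical.

```python
# A sketch under the assumption that the correspondence stores, for each
# destination vertex, a source triangle index plus barycentric weights.
import numpy as np

def push_scalar_field(field, src_triangles, correspondence):
    """Interpolate a per-vertex scalar field onto the destination mesh."""
    out = np.empty(len(correspondence))
    for i, (tri, (w0, w1, w2)) in enumerate(correspondence):
        v0, v1, v2 = src_triangles[tri]
        out[i] = w0 * field[v0] + w1 * field[v1] + w2 * field[v2]
    return out

field = np.array([1.0, 2.0, 3.0, 4.0])      # scalar field on source vertices
src_triangles = [(0, 1, 2), (1, 3, 2)]      # source mesh connectivity
corr = [(0, (1.0, 0.0, 0.0)), (1, (0.2, 0.5, 0.3))]
print(push_scalar_field(field, src_triangles, corr))  # [1.  3.3]
```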
In further embodiments, correspondences may be created between pairs of meshes in a collection of meshes. Information associated with mesh 310 can be blended with information associated with mesh 360. For example, scalar field 320, animation controls 330, topology/geometry data 340, and/or paint data 350 can be “pushed” through the correspondence between mesh 310 and mesh 360 and blended or otherwise combined to create new data. The new data may be pushed back through the correspondence and used to update an existing mesh, or pushed through another correspondence to create a variety of new characters. This can allow the combination and blending of information even if one mesh includes differing topologies and geometries from other meshes in the collection.
In step 420, a collection of meshes is received. The collection may include one or more meshes or references to a set of meshes. The collection may include meshes for models having identical, similar, or different topologies, geometries, or the like.
In step 430, correspondences between pairs of meshes are generated. Each correspondence between a pair of meshes can include functions, relationships, correlations, etc. between one or more points associated with a first mesh and one or more points associated with a second mesh. The correspondence may include a mapping from every location on or within a space near the first mesh to a unique location on or near the second mesh. The correspondence may map one or more points, curves, surfaces, regions, objects, or the like, associated with the first object to one or more points, curves, surfaces, regions, objects, or the like associated with the second object. The correspondence may include a surface correspondence and/or a volume correspondence.
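One hypothetical way such a surface correspondence might be represented is sketched below; the types are illustrative only and are not drawn from any actual system.

```python
# Illustrative types only: a surface location as (face, parametric coords),
# and one direction of a (sampled) correspondence as a map between locations.
from dataclasses import dataclass

@dataclass(frozen=True)
class SurfacePoint:
    face: int        # index of a face on the mesh
    u: float         # parametric coordinates within that face
    v: float

Correspondence = dict[SurfacePoint, SurfacePoint]
```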
In various embodiments, a parameterization is built for the pairs of meshes over a common domain. This common parameter domain can then be used to build a global and continuous correspondence between all points of the source and destination surfaces. The basic framework of the parameterization may rely on user-supplied points, user-supplied curves, inferred discontinuities, or the like. In some embodiments, the parameterization may include a set of feature curves defining a feature curve network.
A correspondence is generated for each pair in the collection. For example, correspondence 540 may be created between meshes 510 and 520, correspondence 550 may be created between meshes 510 and 530, and correspondence 560 may be created between meshes 520 and 530. The correspondences may be created using feature curve networks, in which one or more feature curves may be user authored or automatically determined in response to parameterization information associated with a mesh. The correspondences may include one or more surface correspondences and/or one or more volume correspondences.
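A structural sketch of this pairwise construction follows; build_correspondence stands in for the feature-curve-network construction described above, and the mesh placeholders are hypothetical.

```python
# A structural sketch: one correspondence per pair of meshes, as with
# correspondences 540, 550, and 560. The construction itself is elided.
from itertools import combinations

def build_correspondence(mesh_a, mesh_b):
    ...  # hypothetical: parameterize both meshes over a common domain

meshes = {"510": ..., "520": ..., "530": ...}   # placeholder mesh objects
correspondences = {
    (a, b): build_correspondence(meshes[a], meshes[b])
    for a, b in combinations(sorted(meshes), 2)
}
# keys: ("510", "520"), ("510", "530"), ("520", "530")
```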
Returning to
Accordingly, correspondences may be created between pairs of meshes in a collection of meshes. Information associated with a plurality of meshes can be “pushed” through the correspondences and blended or otherwise combined to create combinations of data that reflect new topologies, geometries, scalar fields, hair styles, or the like that may be transferred to a mesh of new or existing models. Thus, information can be shared, combined, and blended between meshes that may include differing topologies and geometries from other meshes in a collection.
Blending function 630 receives topology information 610 and geometry information 620 for application to mesh 520. Since correspondences 540 and 560 provide full correspondences between all points of meshes 510 and 520, and of meshes 530 and 520, respectively, blending function 630 can apply blended or combined information to corresponding points on mesh 520. Blending function 630 may include one or more values, parameters, attributes, or the like for controlling the weighting, scaling, or transformation of the blending or transfer of common types or different types of information from other meshes.
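By way of example, the sketch below blends per-vertex data from two source meshes, assuming the data has already been pushed through the relevant correspondences so that it is expressed per destination vertex; the weights and data are illustrative.

```python
# A sketch of weighted blending of per-vertex data, assuming both inputs
# are already expressed on the destination mesh via correspondences.
import numpy as np

def blend(fields, weights):
    """Normalized weighted combination of per-vertex arrays of equal shape."""
    weights = np.asarray(weights, dtype=float)
    weights /= weights.sum()
    return sum(w * np.asarray(f, dtype=float) for w, f in zip(weights, fields))

geometry_from_510 = np.array([[0, 0, 0], [1, 0, 0]], dtype=float)
geometry_from_530 = np.array([[0, 0, 2], [1, 0, 2]], dtype=float)
print(blend([geometry_from_510, geometry_from_530], [0.4, 0.6]))
# -> [[0.  0.  1.2] [1.  0.  1.2]]
```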
Referring to
In
Using one or more correspondences between mesh 705 and mesh 745, the first topology of character 705 may be transferred to mesh 745 of character 740. The first topology information of character 705 may be blended with geometry information transferred from character 720 using one or more correspondences to create character 740. For example, a user or animator may use a correspondence to blend the first topology of character 705 with 60% of the geometry of character 720 to create character 740.
In various embodiments, accordingly, information from a plurality of meshes in a collection may be blended or combined using correspondences between pairs of the meshes. The combined information can be used to create combinations of data that reflect new topologies, geometries, scalar fields, hair styles, or the like that may be transferred to a mesh of new or existing models. Thus, information can be shared, combined, and blended between meshes that may include differing topologies and geometries from other meshes in a collection.
In one embodiment, computer system 800 can include monitor 810, computer 820, keyboard 830, user input device 840, computer interfaces 850, or the like. Monitor 810 may typically include familiar display devices, such as a television monitor, a cathode ray tube (CRT), a liquid crystal display (LCD), or the like. Monitor 810 may provide an interface to user input device 840, such as incorporating touch screen technologies.
Computer 820 may typically include familiar computer components, such as processor 860 and one or more memories or storage devices, such as random access memory (RAM) 870, one or more disk drives 880, graphics processing unit (GPU) 885, or the like. Computer 820 may include system bus 890 interconnecting the above components and providing functionality, such as inter-device communication.
In further embodiments, computer 820 may include one or more microprocessors (e.g., single core and multi-core) or micro-controllers, such as PENTIUM, ITANIUM, or CORE 2 processors from Intel of Santa Clara, Calif. and ATHLON, ATHLON XP, and OPTERON processors from Advanced Micro Devices of Sunnyvale, Calif. Further, computer 820 may include one or more hypervisors or operating systems, such as WINDOWS, WINDOWS NT, WINDOWS XP, VISTA, or the like from Microsoft of Redmond, Wash., SOLARIS from Sun Microsystems, LINUX, UNIX, and other UNIX-based operating systems.
In various embodiments, user input device 840 may typically be embodied as a computer mouse, a trackball, a track pad, a joystick, a wireless remote, a drawing tablet, a voice command system, an eye tracking system, or the like. User input device 840 may allow a user of computer system 800 to select objects, icons, text, user interface widgets, or other user interface elements that appear on monitor 810 via a command, such as a click of a button or the like.
In some embodiments, computer interfaces 850 may typically include a communications interface, an Ethernet card, a modem (telephone, satellite, cable, ISDN), (asynchronous) digital subscriber line (DSL) unit, FireWire interface, USB interface, or the like. For example, computer interfaces 850 may be coupled to a computer network, to a FireWire bus, a USB hub, or the like. In other embodiments, computer interfaces 850 may be physically integrated as hardware on the motherboard of computer 820, may be implemented as a software program, such as soft DSL or the like, or may be implemented as a combination thereof.
In various embodiments, computer system 800 may also include software that enables communications over a network, such as the Internet, using one or more communications protocols, such as the HTTP, TCP/IP, RTP/RTSP protocols, or the like. In some embodiments, other communications software and/or transfer protocols may also be used, for example IPX, UDP or the like, for communicating with hosts over the network or with a device directly connected to computer system 800.
RAM 870 and disk drive 880 are examples of machine-readable articles or computer-readable media configured to store information, such as computer programs, executable computer code, human-readable source code, shader code, rendering engines, or the like, and data, such as image files, models including geometrical descriptions of objects, ordered geometric descriptions of objects, procedural descriptions of models, scene descriptor files, or the like. Other types of computer-readable storage media or tangible machine-accessible media include floppy disks, removable hard disks, optical storage media such as CD-ROMS, DVDs and bar codes, semiconductor memories such as flash memories, read-only-memories (ROMS), battery-backed volatile memories, networked storage devices, or the like.
In some embodiments, GPU 885 may include any conventional graphics processing unit. GPU 885 may include one or more vector or parallel processing units that may be user programmable. Such GPUs may be commercially available from NVIDIA, ATI, and other vendors. In this example, GPU 885 can include one or more graphics processors 893, a number of memories and/or registers 895, and a number of frame buffers 897.
As suggested,
Various embodiments of any of one or more inventions whose teachings may be presented within this disclosure can be implemented in the form of logic in software, firmware, hardware, or a combination thereof. The logic may be stored in or on a machine-accessible memory, a machine-readable article, a tangible computer-readable medium, a computer-readable storage medium, or other computer/machine-readable media as a set of instructions adapted to direct a central processing unit (CPU or processor) of a logic machine to perform a set of steps that may be disclosed in various embodiments of an invention presented within this disclosure. The logic may form part of a software program or computer program product as code modules that become operational with a processor of a computer system or an information-processing device when executed to perform a method or process in various embodiments of an invention presented within this disclosure. Based on this disclosure and the teachings provided herein, a person of ordinary skill in the art will appreciate other ways, variations, modifications, alternatives, and/or methods for implementing in software, firmware, hardware, or combinations thereof any of the disclosed operations or functionalities of various embodiments of one or more of the presented inventions.
The disclosed examples, implementations, and various embodiments of any one of those inventions whose teachings may be presented within this disclosure are merely illustrative to convey with reasonable clarity to those skilled in the art the teachings of this disclosure. As these implementations and embodiments may be described with reference to exemplary illustrations or specific figures, various modifications or adaptations of the methods and/or specific structures described can become apparent to those skilled in the art. All such modifications, adaptations, or variations that rely upon this disclosure and these teachings found herein, and through which the teachings have advanced the art, are to be considered within the scope of the one or more inventions whose teachings may be presented within this disclosure. Hence, the present descriptions and drawings should not be considered in a limiting sense, as it is understood that an invention presented within a disclosure is in no way limited to those embodiments specifically illustrated.
Accordingly, the above description and any accompanying drawings, illustrations, and figures are intended to be illustrative but not restrictive. The scope of any invention presented within this disclosure should, therefore, be determined not with simple reference to the above description and those embodiments shown in the figures, but instead should be determined with reference to the pending claims along with their full scope or equivalents.
This application claims priority to and the benefit of U.S. Provisional Patent Application No. 61/030,796, filed Feb. 22, 2008 and entitled “Transfer of Rigs with Temporal Coherence,” the entire disclosure of which is incorporated herein by reference for all purposes. This application may be related to the following commonly assigned applications: U.S. patent application Ser. No. ______ (Atty. Dkt. No. 021751-018800US), filed ______ and entitled “Mesh Transfer”; U.S. patent application Ser. No. ______ (Atty. Dkt. No. 021751-018900US), filed ______ and entitled “Mesh Transfer Using UV-Space”; and U.S. patent application Ser. No. ______ (Atty. Dkt. No. 021751-019000US), filed ______ and entitled “Mesh Transfer in N-D Space.” The respective disclosures of these applications are incorporated herein by reference in their entirety for all purposes.