1. Field of the Invention
The present invention relates to processing 3D data to produce image data and to produce further 3D data.
2. Description of the Related Art
Animation applications that create and render three-dimensional animation data in order to produce two-dimensional image data are often used to create imagery that would be difficult or expensive to produce with real-life actors and objects, whether for animated TV shows, movies or computer games. However, although these applications can create effects that would be difficult to achieve on film, editing operations that would be easy using film are, conversely, difficult for an animator to produce. For example, in order to produce slow motion the animation of every object in a scene must be slowed down by exactly the same amount. Moving backwards and forwards in time, or between scenes, involves moving all the animation curves to the correct place. All this can be extremely time-consuming for an animator, especially when a typical animation includes hundreds of objects.
3. Summary of the Invention
According to an aspect of the present invention, there is provided apparatus for processing 3D data, comprising data storage means, memory means and processing means, wherein said 3D data is stored in said memory means and said processing means is configured to evaluate said 3D data with respect to a first function of time in a first evaluation mode and with respect to a second function of time in a second evaluation mode.
4. Detailed Description of the Preferred Embodiments
Typically, the first stage in producing an animation is a storyboard sketching the outline of a scene.
In order to animate such a scene an animator creates three-dimensional (3D) data. 3D data is a set of data that defines a number of assets, such as sets, objects and characters, and associates animation curves with some or all of them. An animation curve is a function over time that defines positions in space for a bone or vertex, or values for an attribute. When the 3D data is evaluated with respect to a particular time reference, image data, comprising a two-dimensional image, is produced, which can be viewed on display unit 202 or stored for later viewing. This is known as rendering.
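By way of illustration only (the application itself defines no source code), the evaluation of an animation curve at a time reference can be sketched as follows in Python, under the simplifying assumption of linear interpolation between keyframes; a production system would use richer interpolation controlled by tangent values:

```python
# A minimal sketch of evaluating an animation curve at a time reference.
# Assumes linear interpolation between keyframes; real applications use
# spline interpolation shaped by tangent values.

def evaluate_curve(keyframes, t):
    """Return the curve's value at time t.

    keyframes: list of (time, value) pairs, sorted by time.
    """
    if t <= keyframes[0][0]:
        return keyframes[0][1]
    if t >= keyframes[-1][0]:
        return keyframes[-1][1]
    for (t0, v0), (t1, v1) in zip(keyframes, keyframes[1:]):
        if t0 <= t <= t1:
            f = (t - t0) / (t1 - t0)      # fraction of the way through the segment
            return v0 + f * (v1 - v0)     # linear interpolation

# Example: a one-dimensional attribute rising from 0 to 5 over two seconds.
print(evaluate_curve([(0.0, 0.0), (2.0, 5.0)], 1.0))   # -> 2.5
```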
Thus the animator creates the set, which includes the house and its furniture, garden path, garden fence and road. The man must be created as a character and then animated. The keys and the front door must be created as objects and constrained to the character's hand at certain times in order to allow the character to interact with them, and the car is created as an object and animated.
Getting all this right is a sophisticated task. However, finishing the scene is also not easy. Firstly, each of the different cameras must be set up such that they are turned on and off at specific times. Then the slow motion must be applied equally to the man, the car and the audio associated with each. Additionally, in order to make the opening of the front door realistic the animation of picture 104 must overlap slightly with that of picture 103, as a clean cut will look wrong to the viewer in the final Edit. Thus a small part of the animation must be repeated. In prior art systems this additional processing can be time-consuming.
An example of a data processing system suitable for creating and processing 3D data is shown in
Programs executed by computer system 201 are configured to display a simulated three-dimensional world space to a user via the visual display unit 202. Within this world-space, assets required for the image data may be shown and may be manipulated within the world space. Input data is received, possibly via mouse 203, to create or load assets such as sets, objects and characters, provide animation for the characters and objects and view the animation.
Components of computer 201 are detailed in
The system includes processing means provided by a central processing unit (CPU) 301, which fetches instructions for execution and manipulates data via a system bus 302 providing connectivity with Memory Controller Hub (MCH) 303. CPU 301 has a secondary cache 304 comprising 512 kilobytes of high speed static RAM for storing frequently-accessed instructions and data to reduce fetching operations from a larger main memory 305 via MCH 303. MCH 303 thus co-ordinates data and instruction flow with the main memory 305, which is at least one gigabyte in storage capacity, in this embodiment. Instructions and data are thus stored in memory means provided by main memory 305 and cache 304 for swift access by the CPU 301.
Storage means comprises a hard disk drive 306, providing non-volatile bulk storage of instructions and data via an Input/Output Controller Hub (ICH) 307. ICH 307 also provides connectivity to storage device 205. USB 2.0 interface 308 provides connectivity to manually-responsive input devices such as the mouse 203 and keyboard 204.
A graphics card 309 receives graphic data and instructions from CPU 301. Graphics card 309 is connected to MCH 303 by means of a high speed AGP graphics bus 310. A PCI interface 311 provides connections to a network card 312 that provides access to the network connection 209, over which instructions and/or data may be transferred. A sound card 313 is also connected to the PCI interface 311 and receives sound data or instructions from the CPU 301.
Operations performed by the system illustrated in
At step 405 an existing project is loaded into memory from storage 306 if the user wishes to continue with an existing project; alternatively a new project is loaded. At step 406 the user modifies the project by creating and modifying 3D data, and at step 407 the project is saved. At step 408 data is exported if the project is finished—either as rendered image data or as 3D data—and at step 409 a question is asked as to whether there is another project to be modified. If this question is answered in the affirmative then control is returned to step 405 and another project is loaded. If, however, it is answered in the negative then at step 410 all running programs are terminated before the system shuts down at step 411.
The main memory 305 shown in
Project data 505 comprises 3D data 506, which includes data structures for the storage, animation and configuration of objects that are rendered and modified by the animation application instructions 502. These data structures include set, character and object definitions, animation curves, camera objects and lighting objects, video and audio files, lists of camera shots and so on. Project data 505 also includes time reference data 507, which is used by the animation application to provide a transformation between two different functions of time used in the application. Other data 508 includes temporary data structures used by the operating system 501 and other applications 503.
The time-consuming processes detailed with respect to
Channel 603 contains several animation curves. In
The type of value of a keyframe depends on the attribute or object that the animation curve is animating. Thus defining the amount of a particular attribute may require only a single number, defining a position on a plane requires two, while defining a position within 3D space requires three. Thus the nature of a keyframe varies according to the type of object and the type of animation being provided by the animation curve.
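A minimal sketch of this variation, assuming keyframes are simple time/value records whose value may be a scalar, a 2D position or a 3D position (the field names are illustrative, not taken from the application):

```python
# A sketch of how a keyframe's value varies with what is being animated:
# a scalar for a single attribute, two components for a position on a
# plane, three for a position within 3D space.

from dataclasses import dataclass
from typing import Tuple, Union

Value = Union[float, Tuple[float, float], Tuple[float, float, float]]

@dataclass
class Keyframe:
    time: float              # time reference at which the value applies
    value: Value             # scalar, 2D or 3D depending on what is animated
    tangent_in: float = 0.0  # shape of the curve approaching the key
    tangent_out: float = 0.0 # shape of the curve leaving the key

# A scalar attribute, a 2D position and a 3D position:
opacity_key = Keyframe(time=1.0, value=0.5)
plane_key   = Keyframe(time=1.0, value=(3.0, 4.0))
space_key   = Keyframe(time=1.0, value=(3.0, 4.0, 5.0))
```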
Thus the different animations in channel 603 comprise the man walking into the room at 607, stopping and looking around at 608, reacting to seeing his keys at 609, walking over to the table and picking up his keys at 610, walking to the front door at 611, opening the front door at 612, and walking down the garden path at 613. Channel 604 contains a single animation curve 614 of the car driving down the road.
The audio files shown in channels 605 and 606 are also keyframed. For example, file 615 in channel 605 has a keyframe at the beginning that starts playback of the file and a keyframe at the end that stops it. There may be further keyframes in between that vary the speed of playback to match the footsteps to the actual movement of the man.
In contrast, channels 616 and 617 associated with Edit timeline 602 contain no animation. Channel 616 contains blocks representing a number of shots. Each shot is defined by a start and an end time, a camera associated with the shot and possibly a video or audio file. Each of the five shots relates to one of the pictures shown in
The two timelines 601 and 602 define two different functions of time within the application. Thus the time marker 618 on Action timeline 601 and time marker 619 on Edit timeline 602 refer to the same point in the animation, although they are at different times. The use of Edit timeline 602 thus allows the user to perform editing usually associated with an offline or online editing suite, such as repeating animation, slow motion and camera switching, without altering the animation as defined with respect to Action timeline 601.
The dashed lines show the correspondences between the two timelines. Thus the first shot 620 runs from 0 to 3.5 seconds on Edit timeline 602 and shows the animation that also runs from 0 to 3.5 seconds on Action timeline 601, as seen through a particular camera. Shot 621 runs from 3.5 seconds to 6 seconds on Edit timeline 602, and shows the Action that runs from 3.5 to 6 seconds on Action timeline 601, but seen through a different camera. Shot 622 runs from 6 to 9 seconds on Edit timeline 602, and shows the animation that runs from 6 to 9 seconds in Action timeline 601.
Shot 623, however, although it runs from 9 seconds to 12 seconds in Edit timeline 602, shows the animation that runs from 8.7 to 11.7 seconds in Action timeline 601. Thus the repetition of the section of animation indicated by arrow 625 during the door-opening clip 612 is achieved without any actual copying of the 3D data. Adjustment of the size of the overlap is thus made extremely easy.
Shot 624 runs from 12 seconds to 20 seconds in Edit timeline 602, but shows the animation that runs from 11 seconds to 16 seconds in Action timeline 601. Thus, not only does the animation jump backwards by 0.7 seconds, it is also then stretched out to play back in slow motion. Again, this is achieved without any alteration of the animation. This ensures that the animations of the man and the car and their associated audio are all slowed down by exactly the same amount. The final second of animation is not played at all in Edit timeline 602.
The use of the two different timelines also makes the addition of voiceover or music very easy. Although the audio files in channels 605 and 606, which are conceptually connected to the actual animation of the man and the car, are repeated and stretched in the same way, the audio in channel 617 is associated with the Edit timeline and thus plays smoothly over the top without being affected by this repetition and stretching. The application therefore uses two variables, a current Action time and a current Edit time, although there may be more than one current Action time. Thus throughout this document, when the current Action time is referred to it should be appreciated that the variable may take more than one value.
Backward time transformation table 702 gives the transformation necessary to take a time in Action timeline 601 to a corresponding time in Edit timeline 602. It contains the same four columns as forward transformation table 701. However, where in this example a time in the Edit timeline refers to a single Action time, a time in the Action timeline may refer to several Edit times. Thus the shot start and shot end values for the final three rows overlap. For example, an Action time of 8.8 seconds would correspond to an Edit time of 8.8 seconds using the transformation in row 708. However, using the transformation in row 709, the same time also corresponds to an Edit time of 9.1 seconds. This is because the section of animation from 8.7 to 9 seconds is repeated.
Although it is not shown in this example, an Edit time may also refer to more than one Action time, since blending between shots can be achieved using the Edit timeline. Conceptually, when this occurs the Edit timeline is taking input from two different Action times and blending them, whereas an Action time that relates to two Edit times provides input at two different Edit times, and therefore produces two different, although visually identical, pieces of animation data when the animation is played in Edit mode.
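The record layout of the transformation tables is not given explicitly, but the behaviour described above can be sketched as follows, assuming the four columns are shot start, shot end, scale and offset. The figures reproduce the example shots described above:

```python
# A sketch of the two time transformation tables for the example shots,
# assuming each record holds shot start, shot end, scale and offset.
# Forward table:  Action time = Edit time   * scale + offset
# Backward table: Edit time   = Action time * scale + offset
# The field names are assumptions; the application's layout is not given.

from dataclasses import dataclass

@dataclass
class Record:
    shot_start: float
    shot_end: float
    scale: float
    offset: float

forward_table = [
    Record(0.0,   3.5, 1.0,    0.0),  # shot 620: Edit 0-3.5  -> Action 0-3.5
    Record(3.5,   6.0, 1.0,    0.0),  # shot 621: Edit 3.5-6  -> Action 3.5-6
    Record(6.0,   9.0, 1.0,    0.0),  # shot 622: Edit 6-9    -> Action 6-9
    Record(9.0,  12.0, 1.0,   -0.3),  # shot 623: Edit 9-12   -> Action 8.7-11.7
    Record(12.0, 20.0, 0.625,  3.5),  # shot 624: Edit 12-20  -> Action 11-16
]

backward_table = [
    Record(0.0,   3.5, 1.0,  0.0),
    Record(3.5,   6.0, 1.0,  0.0),
    Record(6.0,   9.0, 1.0,  0.0),   # overlaps the next row between 8.7 and 9
    Record(8.7,  11.7, 1.0,  0.3),   # the repeated section appears twice
    Record(11.0, 16.0, 1.6, -5.6),   # slow motion: 5 s of Action fill 8 s of Edit
]

# An Edit time of 9.1 s falls in shot 623 and maps to Action time 8.8 s:
r = forward_table[3]
print(round(9.1 * r.scale + r.offset, 3))   # -> 8.8
```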
At step 803 audio files are added if required and at step 804 video files are added if required. An example of how video might be used in an application is as a background outside a window or as a scene playing on a television. Video data is any two-dimensional image data, whether animated or not, which is imported. It may, for example, have been created using an animation application, or it could be footage shot with a digital video camera or digitised from film.
At step 805 shots in the Edit timeline are created if required and at step 806 the animation thus created is played. At step 807 a question is asked as to whether the user wishes to continue modifying the project, and if this question is answered in the affirmative control is returned to step 801. If it is answered in the negative step 406 is concluded.
Action interface 906 includes Action timeline 601, the four Action channels 603 to 606, complete with their animation curves, and Action time marker 618. Edit interface 905 includes Edit timeline 602, the Edit channels 616 and 617 and their associated shots, and Edit marker 619.
The animation, or image data, produced by evaluating the 3D data with respect to the time reference indicated by the position of Action marker 618 on Action timeline 601 is shown in viewer 902. The image data shown in the viewer may include images of many objects that do not have corresponding channels in the Action interface 906, such as the house, the garden, the road, the camera through which the scene is being viewed and the lights that are lighting it. This is because these objects are not animated; however, their definitions are part of 3D data 506 and thus their positions and appearances are calculated as part of the evaluation of the 3D data. It is, of course, possible to animate any object within a scene; a light may be animated to indicate the switching on or off of a light bulb or the movement of the sun, a camera may be animated in order to switch views or to pan, and so on.
Browser 903 contains a list of all assets which can be created and used in the animation. For example, it contains files that define basic actors and more complicated characters, household objects, sets, cameras and lights, and so on. When a user wishes to include such an object in his animation an instance of the required file is created and becomes a definition within 3D data 506. The browser 903 also includes more basic building blocks such as lines, two-dimensional and three-dimensional shapes and so on, allowing the user to build objects that are not already included in the asset browser. The browser 903 also includes files containing standard animation curves that may be applied to characters and objects.
Menu bar 904 includes transport controls 907 to play the animation, an Edit/Action switching button 908 which enables the user to switch between Action and Edit modes, a Key button 909 which facilitates the creating of keyframes, and menu button 910 which allows the user to access a variety of options, effects, plug-ins and other functions to improve the animation.
The interface and animation application described herein are provided as examples of how the invention may be embodied. However, the skilled reader will appreciate that the type of animation data used, the layout of the interface and the exact method of creating, modifying and rendering image data are not crucial to the invention. Animation application 502 is a simple application using only animation curves and constraints. However, other interfaces exist which produce animation in far more sophisticated ways. These could also be used in other embodiments of the invention.
At step 1005 constraints are added. Constraints are conditions placed on a part or whole of a character or object that are taken into account when evaluating an animation curve. A common constraint is that a character's feet may not pass through the floor, and thus even if an animation curve gives a position below the floor for a particular time reference, the constraint will not allow this to happen. Bones within an actor are constrained to each other such that when, for example, the hand is moved the rest of the arm also moves without having to be separately animated.
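As a minimal sketch of the floor constraint mentioned above (assuming, purely for illustration, that such a constraint is applied as a clamp on the evaluated position):

```python
# A sketch of applying a floor constraint after curve evaluation, under the
# assumption that constraints are applied as a post-evaluation clamp. The
# names here are illustrative, not taken from the application.

FLOOR_HEIGHT = 0.0

def apply_floor_constraint(position):
    """Keep an evaluated position at or above the floor plane."""
    x, y, z = position
    return (x, max(y, FLOOR_HEIGHT), z)

# Even if the animation curve yields a position below the floor,
# the constrained result stays on it:
print(apply_floor_constraint((1.0, -0.2, 3.0)))   # -> (1.0, 0.0, 3.0)
```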
At step 1006 a question is asked as to whether the user wishes to continue animating and if this question is answered in the affirmative control is returned to step 1001. If it is answered in the negative then step 802 is concluded.
At step 1104 the user moves a part of the character or object to a desired position and then selects Key button 909. At step 1105 the properties of the new keyframe are calculated, including the value the keyframe should take based on the changes made by the user at step 1104, and the values that its data fields should take, such as the tangent in and out values. At step 1106 a keyframe having these properties, and also having the time specified by the movement of Action marker 618, is inserted in the animation curve in the character or object channel. At step 1107 this channel is re-displayed so that the user can see the new keyframe.
At step 1108 a question is asked as to whether there is another keyframe to add and if this question is answered in the affirmative control is returned to step 1101. If it is answered in the negative then step 1004 is concluded.
If the question asked at step 1201 is answered in the affirmative, to the effect that the Edit marker was moved, then at step 1202 the Action time corresponding to the new position of Edit marker 619 is obtained from forward time transformation table 701. If the question asked at step 1201 is answered in the negative, to the effect that the Action marker 618 was moved, then the current Action time is obtained from the new position of the marker and at step 1203 the current Edit time is obtained from backward time transformation table 702.
Following either of steps 1202 or 1203, at step 1204 the Action marker 618 is displayed at the current Action time on Action timeline 601, and the Edit time marker 619 is displayed at the current Edit time on Edit timeline 602. Thus, whichever time marker is moved, the corresponding time in the other interface is calculated and the other time marker is moved to match.
If, however, it is answered in the negative then at step 1302 the first record in forward time transformation table 701 is selected. At step 1303 a question is asked as to whether the value in column 702, indicating the start of the shot to which this record applies, is less than the current Edit time. If this question is answered in the affirmative then a further question is asked at step 1304 as to whether the value in column 703, indicating the end of the shot, is greater than the current Edit time. If this question is also answered in the affirmative then the current Edit time falls within the shot start and finish times of the record, and thus at step 1305 the current Action time is obtained by multiplying the Edit time by the scale value for that record and adding the offset. Alternatively, if the question asked at step 1304 is answered in the negative then both the start and end times of the record are earlier than the current Edit time. In this case, or following step 1305, a question is asked at step 1306 as to whether there is another record in the table. If this question is answered in the affirmative control is returned to step 1302 and the next record in the table is selected, while an answer in the negative concludes step 1202.
If at any time the question asked at step 1303 is answered in the negative, to the effect that the shot start time of the current record is greater than the current Edit time, then since the records in table 701 are arranged in numerical order there is no further record containing the current Edit time. Thus a question is asked at step 1307 as to whether a current Action time has been determined. If this is answered in the affirmative then step 1202 is completed. If, however, it is answered in the negative then there is no shot at the position of Edit marker 619. When this occurs the previous shot is looped, and so at step 1308 the previous record is selected. At step 1309 a looped Edit time is set to be the shot start time plus the difference between the current Edit time and the shot end time. The Action time is then calculated at step 1310 as the looped Edit time multiplied by the scale of the selected record, added to the offset.
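Continuing the table sketch above, this lookup may be rendered as follows; this is one interpretation of the flowchart, not the application's own code:

```python
# A sketch of the Edit-to-Action lookup of steps 1301 to 1310, reusing the
# Record class and forward_table above. When no shot contains the current
# Edit time, the previous shot is looped as described for steps 1308-1310.

def edit_to_action(edit_time, table):
    action_time = None
    previous = None
    for record in table:                    # records are in time order
        if record.shot_start > edit_time:   # step 1303 answered "no":
            break                           # no later record can match
        if record.shot_end > edit_time:     # steps 1303/1304 both "yes"
            action_time = edit_time * record.scale + record.offset  # step 1305
        previous = record
    if action_time is None and previous is not None:
        # Steps 1308-1310: loop the previous shot from its start.
        looped = previous.shot_start + (edit_time - previous.shot_end)
        action_time = looped * previous.scale + previous.offset
    return action_time

print(round(edit_to_action(9.1, forward_table), 3))   # -> 8.8
```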
At step 1402 the first record in backward time transformation table 702 is selected and at step 1403 a question is asked as to whether the shot start time of this record is less than or equal to the current Action time. If this question is answered in the negative then the selected record is after the current Action time. Since the records are organised in numerical order by the shot start value in table 702, this means either that the current Edit time has already been found or that there is no Edit time corresponding to the new Action time. In the latter case, in contrast to the procedure used during step 1202, it is acceptable for the Edit time to be empty and the time marker 619 will not be displayed on Edit timeline 602.
Alternatively, if the question asked at step 1403 is answered in the affirmative then at step 1404 a further question is asked as to whether the shot end value is greater than or equal to the current Action time. If this question is answered in the negative then control is directed to step 1410 at which a question is asked as to whether there is another record in the table, and if this question is answered in the affirmative control is returned to step 1402 and the next record is selected. If, however, the question asked at step 1404 is answered in the affirmative then at step 1405 an Edit time corresponding to the current Action time is calculated as the product of the current Action time and the scale value in the current record, added to the offset value.
At step 1406 the difference between this Edit time and the previous position of the Edit time marker is calculated by taking the modulus of the variable MARK subtracted from this Edit time. At step 1407 a question is asked as to whether this modulus is less than the value of the variable DIFFERENCE. On the first iteration this question will always be answered in the affirmative and at step 1408 the value of the variable DIFFERENCE is set to be this modulus. The calculated Edit time is then saved as the current Edit time at step 1409. Alternatively, if the modulus is not less than DIFFERENCE on a subsequent iteration, steps 1408 and 1409 are bypassed. The question is then asked as to whether there is another record at step 1410, with an answer in the affirmative returning control to step 1402 and an answer in the negative concluding step 1203. Thus, following the final pass of these steps, the current Edit time will be the Edit time corresponding to the current Action time that is closest to the position of the previous Edit marker.
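Again continuing the same sketch, the backward lookup of steps 1402 to 1410 may be expressed as follows, with MARK and DIFFERENCE behaving as described above:

```python
# A sketch of the Action-to-Edit lookup of steps 1402 to 1410, reusing the
# Record class and backward_table above. Of all Edit times that map to the
# current Action time, the one closest to the previous marker position
# (MARK) is kept.

def action_to_edit(action_time, table, mark):
    best_edit = None
    difference = float("inf")               # DIFFERENCE starts large, so the
    for record in table:                    # first candidate is always kept
        if record.shot_start > action_time: # records are ordered; none left
            break
        if record.shot_end >= action_time:                       # step 1404
            edit = action_time * record.scale + record.offset    # step 1405
            if abs(edit - mark) < difference:                    # steps 1406/1407
                difference = abs(edit - mark)                    # step 1408
                best_edit = edit                                 # step 1409
    return best_edit   # None means no shot shows this Action time

# Action time 8.8 s maps to Edit 8.8 s or 9.1 s; the marker's previous
# position decides which is chosen:
print(round(action_to_edit(8.8, backward_table, mark=8.5), 3))  # -> 8.8
print(round(action_to_edit(8.8, backward_table, mark=9.0), 3))  # -> 9.1
```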
At step 1505 a question is asked as to whether there is another bone or vertex in the scene and if this question is answered in the affirmative control is returned to step 1501, while if it is answered in the negative then at step 1506 all assets in the scene are displayed. This step involves considering the point of view of the camera in use and translating every 3D position within the 3D world to a two-dimensional position on a plane, with respect to lighting and whether or not an object is visible behind another object. This two-dimensional data (image data) is then displayed in viewer 902.
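The translation of a 3D position to a two-dimensional position on a plane can be sketched under the simplifying assumption of a pinhole camera at the origin looking along the z axis; lighting and visibility testing are omitted here:

```python
# A sketch of the display step: projecting a 3D world position onto the 2D
# camera plane. Real renderers also apply camera orientation, lighting and
# occlusion tests; this minimal form assumes the camera sits at the origin.

def project(point, focal_length=1.0):
    """Project a 3D point (x, y, z) onto the 2D image plane."""
    x, y, z = point
    if z <= 0:
        return None          # behind the camera, not visible
    return (focal_length * x / z, focal_length * y / z)

# A point two units in front of the camera appears at half its world offset:
print(project((1.0, 0.5, 2.0)))   # -> (0.5, 0.25)
```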
Once animation data has been created at steps 801 to 804, the user can create shots in the Edit interface in order to control how the animation is played. Although changes to the animation may be made while the application is in Edit mode, changes made within the Edit interface cannot affect the actual animation. For example, they cannot change the appearance of a character, change his animation curve or change the way he interacts with other objects. Shots created within the Edit interface only affect how the animation is viewed; for example, by jumping forwards and backwards on the Action timeline the Edited animation can change the order of events, possibly repeating some events and leaving others out entirely. Time can be contracted or dilated. Additionally, shots in the Edit interface can be overlapped to create a blend, wipe, or other type of editing effect. This gives rise to the plurality of current Action times as described with reference to
Thus at step 805 detailed in
Modifications may include changes to the start and end time, the camera to be used or the video associated with it. If the shot is a proxy of animation data from another project then the user may, for example by “double-clicking” on the relevant shot within Edit interface 905, save and close the current project and load the project data that the shot refers to.
At step 1605 a question is asked as to whether this modification comprises movement of the shot in time, or movement of one of its boundaries. If this question is answered in the affirmative then the records in the time transformation tables 701 and 702 are altered at step 1606, following which, or if the question is answered in the negative, the shot itself is altered at step 1607.
At step 1608 a question is asked as to whether the user wishes to add more shots and if this question is answered in the affirmative control is returned to step 1601, while if it is answered in the negative step 805 is concluded.
The skilled reader will appreciate that this description of shot modification is only an example. Modifications to shots could be carried out using an interface such as described here, using a menu system, allowing the user to type in values, or any other interface that allows the user to specify that particular times in the Edit timeline are associated with times in the Action timeline.
Thus, if the question asked at step 1802 is answered in the affirmative, to the effect that time discontinuity is on, then at step 1804 a further question is asked as to whether the whole clip was moved. If this question is answered in the affirmative then at step 1805 the following changes are made to the time transformation tables: the shot start and end times are changed only in the forward table 701, while the offset values are changed in both.
If the question asked at step 1804 is answered in the negative, to the effect that the whole clip was not moved, then only one boundary was moved. A further question is thus asked at step 1806 as to whether scaling is turned on. Again, this is a function which can be accessed via menu 910. If scaling is on then shortening or stretching a shot in Edit results in a speeding up or slowing down of the playback respectively, while if scaling is off a change in the length of the shot results in a decrease or increase in the amount of animation being seen. Thus if this question is answered in the affirmative, to the effect that scaling is on, the following changes are made in the transformation tables at step 1807: the shot start and end times are changed in the forward table 701 only, and the offset and scale values are changed in both tables. If, however, scaling is off and the question asked at step 1806 is answered in the negative, then at step 1808 the shot start and end times are changed in the backward table 702 only and the offset values are changed in both.
Step 1809 follows any of steps 1803, 1805, 1807 and 1808; here, records immediately before or after the affected records in the time transformation tables may be altered, according to user-controlled settings. These settings control what happens to a shot when the user moves its neighbour, for example preventing gaps, preventing overlaps, creating blends and so on.
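To make the table updates concrete, the step 1805 case (the whole shot moved with time discontinuity on) might look as follows, continuing the Record sketch above; the other cases follow the same pattern, with the scale values also adjusted where appropriate. This is an interpretation of the description, not the application's code:

```python
# A sketch of the step 1805 case only: the whole shot is dragged in the Edit
# timeline, so the same Action interval plays at a new Edit time. The shot
# boundaries move in the forward table only, and the offsets change in both
# tables so the Edit/Action mapping still lands on the same animation.

def move_shot(fwd, bwd, delta):
    """Shift one shot by delta seconds in the Edit timeline.

    fwd, bwd: the corresponding Record in each transformation table.
    """
    # Forward table: the shot now occupies a shifted Edit span.
    fwd.shot_start += delta
    fwd.shot_end += delta
    # The Action time shown must not change: Action = Edit * scale + offset,
    # and Edit has grown by delta, so the offset shrinks by scale * delta.
    fwd.offset -= fwd.scale * delta
    # Backward table: Edit = Action * scale + offset, and the produced Edit
    # time must grow by delta, so the offset grows by delta. The shot start
    # and end times, expressed in Action time, are unchanged.
    bwd.offset += delta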
The effects of the changes made in step 1604 will be illustrated in FIGS. 20 to 23.
It can also be seen that there is a gap between shots 623 and 624, which would lead to a looping of the beginning of shot 623 at this point if the animation were played in Edit mode, whereas if the animation were played in Action mode then the animation between dotted lines 2001 and 2002, which would be missed out in Edit mode, would be played.
Finally, the user turns both time discontinuity and scaling on and moves the trailing edge of shot 624. This causes playback of the animation to be slowed down during this shot, with the animation taking place between 11 and 16 seconds in Action being played between 12 and 20 seconds in Edit.
The addition of the audio now completes the example shown in
If the animation is played using the Action timeline then the resulting image data is that which would be produced were the Edit interface and timeline not to exist. However, during playback the Edit time marker 619 is moved to indicate the correspondence between the two timelines. This is shown in
At step 2501 the 3D data is evaluated using the current Action time and the image data thus produced is displayed in viewer 902. This evaluation step is substantially identical to step 1103 described in
At step 2504 the current Action time is displayed by placing time marker 618 at the appropriate time on Action timeline 601, while the Edit time is similarly displayed using Edit time marker 619 on Edit timeline 602. At step 2505 a question is asked as to whether the stop button in the transport controls 907 has been pressed and if this question is answered in the negative then a further question is asked at step 2506 as to whether the end of the Action timeline has been reached. If either of these questions is answered in the affirmative then playback is ceased, while if the question asked at step 2506 is answered in the negative then control is returned to step 2501 and the next evaluation of the 3D data is made.
By updating the current Action time as described with reference to
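A sketch of this playback loop, assuming the current Action time is advanced by the wall-clock time each frame takes, so that playback runs at true speed; the helpers evaluate_scene, display and stop_pressed are hypothetical:

```python
# A sketch of the Action-mode playback loop of steps 2501 to 2506. The
# elapsed-time increment mirrors the Edit-mode increment described at
# step 2609; its use here is an assumption.

import time

def play_action(action_end, evaluate_scene, display, stop_pressed):
    action_time = 0.0
    while not stop_pressed() and action_time <= action_end:  # steps 2505/2506
        frame_start = time.monotonic()
        display(evaluate_scene(action_time))   # step 2501: evaluate and show
        # Advance by the real time the frame took, so playback speed does
        # not depend on how long each evaluation takes.
        action_time += time.monotonic() - frame_start
```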
At step 2602 the first of these shots is selected and at step 2603 a question is asked as to whether the shot is a proxy shot. If this question is answered in the affirmative then at step 2604 the frame of the proxy at the current Edit time is obtained, following which control is directed to step 2607.
If the question is answered in the negative then at step 2605 the camera and background associated with that shot are identified, and also the value of the current Action time that is associated with the shot. At step 2606 this Action time is used to evaluate the animation data, with the camera and background identified at step 2605 overriding any camera and background selection contained in the Action interface. This is carried out in much the same way as step 1103 detailed in
At step 2607 a question is asked as to whether another shot was identified at step 2601, and if this question is answered in the affirmative then control is returned to step 2602 and the next shot is selected. If it is answered in the negative then at step 2608 the images obtained at steps 2604 and 2606 are blended and displayed. If there was only one shot identified at step 2601 then the image data is displayed without blending.
At step 2609 the current Edit time is incremented by the amount of time it took to perform steps 2601 to 2608, and at step 2610 new current Action times are determined. At step 2611 the current Edit and Action times are displayed in their respective timelines and at step 2612 a question is asked as to whether the stop button has been pressed. If this question is answered in the negative then a further question is asked at step 2613 as to whether the end of the Edit timeline has been reached, and if this question is answered in the negative control is returned to step 2601 to produce the next image. If either of these questions is answered in the affirmative then playback ceases.
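A corresponding sketch of Edit-mode playback, with the per-shot rendering and blending of steps 2601 to 2608 reduced to hypothetical helpers:

```python
# A sketch of the Edit-mode playback of steps 2601 to 2613. Each pass
# renders every shot active at the current Edit time (two shots while
# blending), blends the images, then advances by the real time elapsed.

import time

def play_edit(edit_end, shots_at, render_shot, blend, display, stop_pressed):
    edit_time = 0.0
    while not stop_pressed() and edit_time <= edit_end:   # steps 2612/2613
        frame_start = time.monotonic()
        images = []
        for shot in shots_at(edit_time):                  # step 2601
            # Steps 2603-2606: a proxy shot supplies a stored frame, while
            # an ordinary shot evaluates the 3D data at its own Action time,
            # using the camera and background associated with the shot.
            images.append(render_shot(shot, edit_time))
        display(blend(images))    # step 2608; one image passes through as-is
        edit_time += time.monotonic() - frame_start       # step 2609
```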
These examples show that the ability to export both image data and animation data is important. Thus at step 2801 a question is asked as to whether the animation is to be exported as rendered image data. If this question is answered in the affirmative then at step 2802 a further question is asked as to whether this rendering should be performed using the Edit timeline. If this question is answered in the affirmative then at step 2803 the 3D data is evaluated using the Edit timeline and stored. This evaluation is substantially similar to that which takes place at step 2403. The main differences in this embodiment are that, as previously described, the rendering produces a number of equally spaced frames, the interval between frames being dependent upon the number of frames per second that are required, and also that the image data is not output to display means 202 but is stored on hard disk drive 306. From here it may be written onto a CD-ROM or sent via network means 207 to a third party. Alternatively, the data may be rendered directly to some other external storage means.
If the question asked at step 2802 is answered in the negative, to the effect that the render is to take place from the Action timeline, then this is carried out at step 2804. Again, this is in substantially the same manner as at step 2402 except that a specified number of equally spaced frames are produced and the data is stored rather than displayed.
Following either of steps 2803 or 2804, an Edit Decision List (EDL) may be produced from the shots in Edit timeline 602. This is particularly relevant if the data was rendered using the Action timeline, in which case the EDL indicates the editing which the animator has decided upon, but does not limit a later editor to using this editing, as is the case if the animation is output using Edit timeline 602.
If the question asked at step 2801 is answered in the negative, to the effect that the animation data is to be exported as 3D data, and not as rendered image data, then at step 2806 the existing 3D data 506 undergoes a process known as “unwrapping” in which the 3D data is altered such that the playback is identical whether it is played in Edit or Action mode. This means that the editing performed by the user becomes permanent. The 3D data may also be simplified. When this process is complete the animation data is stored in hard disk drive 306 ready for export in any appropriate manner. The exported 3D data thus produced can be rendered by any computer system or games console fitted with a graphics card capable of rendering animation data.
Following step 2902, or if the question asked at step 2901 is answered in the negative, at step 2903 new animation curves are produced and at step 2904 a new camera channel is created, while at step 2905 animation channels for any audio or video data in the Edit timeline are created in the Action timeline. At step 2906 the time transformation tables are reset by changing all the offset values to 0 and all the scale values to 1 in the forward transformation table 701, and making the backward transformation table 702 identical to the forward table. At step 2907 the 3D data produced by steps 2901 to 2906 is exported to storage.
Thus, at the end of step 2806, 3D data has been produced that yields substantially identical image data whether played in the Edit timeline or the Action timeline. This means that once an animator has edited 3D data to his satisfaction using the Edit timeline, he can produce animation data that will have the same effect when rendered using a standard graphics card, or indeed any animation application, without the need for the Edit timeline.
At step 3007 a question is asked as to whether there is another keyframe in the channel and if this question is answered in the affirmative control is returned to step 3003 and the next keyframe is selected. Alternatively, if the question is answered in the negative then all keyframes in the channel have been considered and thus all the old animation curves in the channel are deleted at step 3008. At step 3009 a further question is asked as to whether there is another channel to be considered, and if this question is answered in the affirmative control is returned to step 3001 and the next channel is selected. Alternatively, step 2902 is concluded.
Thus, following the simplification of the animation channels, each animation channel contains a single curve which takes account of any constraints, blending of curves, or any other type of special feature which an animation application could apply, which were applied to the previous animation curves in the channel. This reduces the amount of storage space required by the 3D data, and also ensures that the 3D data can be rendered to produce image data using even a basic graphics card.
As previously discussed, this simplification process could be used on its own, and need not be part of the unwrapping process. However, once it is performed it is more difficult to modify the animation and so it is usually carried out at the end of a project. For example, if the animator were handing over a finished scene in a film to another animator in order for him to integrate it with his scene, then he might simplify the animation first. Equally, if the 3D data is to be exported to an application that understands constraints, animation blends and other aspects of the 3D data but does not use an Edit timeline then the animation could be unwrapped without being simplified.
At step 3106 a question is asked as to whether there is another shot in the shot channel and if this question is answered in the affirmative control is returned to step 3103 and the next shot is selected. If this question is answered in the negative then at step 3107 the curve selected at step 3101 is deleted, leaving only the new curve that was created at step 3102 and populated during repeated iterations of steps 3103 to 3105.
A further question is then asked at step 3108 as to whether there is another animation curve in the 3D data. If this question is answered in the affirmative then control is returned to step 3101 and the next animation curve is altered. If it is answered in the negative then all the curves have been considered and step 2903 is concluded.
At step 3203 the first keyframe in the animation curve selected at step 3101 is selected and at step 3204 a question is asked as to whether the time of this keyframe is greater than or equal to time T1. If this question is answered in the negative then the selected keyframe occurs before the time of the selected shot, and control is directed to step 3208 to ask whether there is another keyframe in the curve. If it is answered in the affirmative then a further question is asked at step 3205 as to whether the time is less than or equal to time T2. If this question is answered in the negative then the keyframe occurs after the time of the selected shot; since the keyframes are considered in order, this means that all further keyframes will also occur outside the shot and so step 3104 is concluded.
If the question is answered in the affirmative, however, then the keyframe occurs within the shot and so at step 3206 the time of the keyframe in the Edit timeline is evaluated using the scale and offset values of the record selected at step 3202. This gives the time at which the keyframe occurs if the animation is being played in the Edit timeline. At step 3207 a new keyframe is created in the new curve that has the same properties as the keyframe selected at step 3203 but is at the time evaluated at step 3206. (The skilled reader will here appreciate that the actual properties of a keyframe may change if it is moved in time; this is dependent upon the type of keyframe. The new keyframe is one that, when moved to the evaluated time, gives the same value for the animation as the keyframe selected.)
At step 3208 a question is asked as to whether there is another keyframe in the curve, and if this question is answered in the affirmative control is returned to step 3203 and the next keyframe is selected. If it is answered in the negative then step 3104 is concluded. This is also the case if the question asked at step 3205 is answered in the negative (since keyframes are considered in sequence).
At the end of step 3104 all the keyframes occurring within the selected shot have been copied to the new curve such that they occur in the Action timeline at the same time at which they would occur if played in the Edit timeline.
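A sketch of this remapping for a single shot, assuming T1 and T2 bound the shot in the Action timeline and that the selected record supplies Edit time = Action time × scale + offset; keyframe properties other than time are simply carried over here, although, as noted above, some keyframe types would need their properties recomputed when moved:

```python
# A sketch of the keyframe remapping of steps 3203 to 3208 for one shot.
# Keyframes falling within [t1, t2] are copied to the new curve at the
# times they would occur if the animation were played in the Edit timeline.

def remap_keyframes(keyframes, t1, t2, scale, offset):
    """Copy keyframes within [t1, t2] to their Edit-timeline times."""
    new_curve = []
    for kf_time, value in keyframes:      # keyframes are in time order
        if kf_time < t1:
            continue                      # before the shot: skip (step 3204)
        if kf_time > t2:
            break                         # after the shot: none left (step 3205)
        new_curve.append((kf_time * scale + offset, value))   # steps 3206/3207
    return new_curve

# Shot 623 repeats Action 8.7-11.7 s at Edit 9-12 s, so a keyframe at 9.0 s
# lands at 9.3 s in the unwrapped curve:
print(remap_keyframes([(9.0, "pose")], 8.7, 11.7, scale=1.0, offset=0.3))
```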
Similarly, at step 3304 a question is asked as to whether a keyframe already exists in the new curve at the end time of the shot, with an answer in the affirmative leading to the completion of step 3105. Alternatively, an answer in the negative means that the value of the selected animation curve at time T2 is determined at step 3305 and a keyframe given this value is created in the new curve at the end time of the shot at step 3306.
At step 3408 a question is asked as to whether there is another shot in shot channel 616 and if this question is answered in the affirmative control is returned to step 3403 and the next shot is selected. If it is answered in the negative then at step 3409 all camera objects except this new one are deleted.
Thus at the end of step 2904 a single camera has been created that is animated in order to jump between the positions of the camera objects it replaced, according to which cameras were associated with each of the shots in shot channel 616.
The separate animation curves in channels 603 and 604 are shown. (Although the simplification process will create a single animation curve from these, the individual blocks are still shown at 3503 to facilitate understanding.) Curves 607, 608, 609, 610 and 611 fall within shots which have a one-to-one correlation between the Edit and Action timelines. However, a part of block 612 falls within two shots, and so this block has been split up into two sections 3504 and 3505 of different hatching, with the overlapping area indicating the area of repetition where animation will be copied. Similarly, block 613 is split into two sections 3506 and 3507. Additionally, curve 614 in animation channel 604 is split into three sections. Section 3509 entirely overlaps section 3508, and section 3510 is unhatched to indicate that it does not correspond to any shot in Edit and thus the animation data contained within would not be rendered in Edit mode. Note that sections 3504 to 3510 do not represent any kind of splitting of the animation curve but are merely for illustration purposes.
Animation data 3503 shows the unwrapped data. Sections 3504 and 3505 no longer overlap; instead the latter immediately follows the former. Section 3506 has also been moved along to the right, while section 3507 has been moved even further so that it does not overlap section 3506. It has also been stretched. Section 3508 has been moved, while section 3509 has also been moved so that it does not overlap section 3508 and has also been stretched. Section 3510 has been removed. A new camera channel 3511 has been added.
Thus it can be seen that the unwrapped animation data 3503 corresponds directly to the animation data in 3502 when played according to shot channel 616. It is therefore possible to export animation data (excluding any information regarding the Edit timeline), and when rendered it will be identical to the final version of the animation when played in Edit mode before unwrapping.
The unwrapping process herein described overwrites the previous animation data. However, the skilled reader will appreciate that the embodiment could be varied to allow the unwrapped data to be copied to a new project, rather than erasing the animation data in the project being modified.