The present invention relates generally to film-making and in particular to a (collaborative) pre-production tool.
This section is intended to introduce the reader to various aspects of art, which may be related to various aspects of the present invention that are described and/or claimed below. This discussion is believed to be helpful in providing the reader with background information to facilitate a better understanding of the various aspects of the present invention. Accordingly, it should be understood that these statements are to be read in this light, and not as admissions of prior art.
Until recently, film-making was an area in which film studios or other kinds of production companies essentially handled most, if not all, of the process from idea to release. A studio could for example buy the rights to a script (or a story, perhaps from a book), rework the script, plan the production (pre-production), shoot the film, take it through post-production and then distribute it.
Among these steps, pre-production is very important since it, broadly speaking, breaks the script down into smaller elements (shots), defines how the shots are to be made (live shooting, pure CGI or a mix of both) and how they are composed, and also determines multiple requirements such as shooting locations, accessories, crew and equipment. A production schedule defines in detail the resources needed for each scene. The resources may be any kind of resource from a vast list comprising for example actors, cameramen, grips, foley artists, hairdressers, animal trainers, catering, stuntmen, set security and permits (e.g. to be able to close off a street for shooting).
During the major part of the history of film-making, pre-production has been performed by the film studio, which perhaps outsourced specific parts of the process, all the while under the supervision of the producer who, among other things, is in charge of making sure that the budget is respected. Usually, the producer imposes some decisions; a deal may for example be done with a country or a city that wishes to be featured in the film and in return offers subsidies of various kinds.
It will be appreciated that the studios have the necessary expertise to handle the pre-production and that they have internal methods to follow. However, an interesting trend, often named collaborative film-making, has emerged over recent years. It allows often physically distant participants to contribute to making a movie via the Internet. The collaboration can cover several aspects of traditional filmmaking: funding by bringing in at least part of the budget, participation in script writing, proposal of shooting locations, voting during actor casting, or even post-production tasks like audio dubbing or subtitling in a specific language.
As collaborative film-making becomes more widespread, there will be a greater demand for tools that allow and support collaborative pre-production. For one thing, a small, independent production is likely to lack the expertise of a studio and, for another, a collaborative effort may bring in people from all over the globe in an ad hoc team. It goes without saying that it is desirable to have these people work together in an efficient manner.
Some multiuser tools exist—5th Kind, Scenios, Lightspeed EPS, AFrame, Celtix—but they only partially cover the needs of collaborative film-making. Even though they do use the terminology and organisation typical of the film industry, most of them are mainly to be seen as tools for storing and sharing different files.
It is well known that during the filmmaking process, more assets (shots etc.) are produced than are used in the final release (or extended cuts) of the movie. As a consequence, for a single produced movie, typically more than 50 hours of generated video are never used. Some of these shots are of course highly specific to the movie, but plenty of shots are more generic and could be reused in another movie. This is particularly true for the so-called “establishing” shots that are inserted to provide some context. Typical examples are a flight over a city or a shot of the main hall of Grand Central Station to situate geographically the location where the action takes place. Reusing such assets may be a very cost-efficient solution when other films are made.
In addition, with the continuous progress in computation power and particularly in graphics processing units, more and more computer-generated imagery (CGI) techniques are used in filmmaking in different ways: insertion of virtual elements into live shooting, addition of visual effects (fog, fire, etc.), compositing of live shooting on a green-screen background with CGI-generated sequences or other shooting. However, not all directors, especially beginners, are familiar or comfortable with these techniques.
It will thus be appreciated that there is a need for a solution that provides a tool for efficient collaborative pre-production, one that facilitates the production by recommending existing assets to be re-used in the movie and by proposing different production alternatives with cost and delay estimations. The present invention provides such a solution.
In a first aspect, the invention is directed to a device for pre-production of a film comprising a pre-visualization tool. The device is configured to obtain a number of scenes of the film; retrieve a prioritized list of ways to render the scenes, each way corresponding to a type of asset, the list indicating which types of assets are to be retrieved for rendering in preference to other assets, each asset being a representation of at least one scene; retrieve, for each scene, at least one asset representing the scene, the at least one asset representing the scene comprising an asset of the type of asset that corresponds to the most highly prioritized way among the assets available for the scene; and use the retrieved assets for the scenes to render a pre-visualization for the film.
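By way of illustration only, the following sketch shows one possible reading of this prioritized retrieval, in Python. The names (Asset, AssetStore, select_assets) and the example priority list are hypothetical and are not part of the claimed device; the sketch merely walks the priority list for each scene and keeps the first asset type that is actually available.

```python
# Minimal sketch of the prioritized asset retrieval described above. All names
# (Asset, AssetStore, select_assets) are hypothetical and only illustrate the
# idea of falling back through a prioritized list of asset types.
from dataclasses import dataclass
from typing import Optional


@dataclass
class Asset:
    scene_id: int
    asset_type: str          # e.g. "video", "shot_breakdown", "still", "script"
    payload: object = None   # the actual media or text


class AssetStore:
    """Toy in-memory store keyed by (scene_id, asset_type)."""
    def __init__(self, assets):
        self._assets = {(a.scene_id, a.asset_type): a for a in assets}

    def get(self, scene_id, asset_type) -> Optional[Asset]:
        return self._assets.get((scene_id, asset_type))


def select_assets(scene_ids, priority, store):
    """For each scene, return the asset of the highest-prioritized type that is
    actually available, to be used when rendering the pre-visualization."""
    selection = {}
    for scene_id in scene_ids:
        for asset_type in priority:          # priority[0] is the most preferred type
            asset = store.get(scene_id, asset_type)
            if asset is not None:
                selection[scene_id] = asset
                break
    return selection


# Example: scene 17 only has its script text, scene 42 has a reusable video asset.
store = AssetStore([
    Asset(17, "script", "INT. FLORA'S KITCHEN - MORNING ..."),
    Asset(42, "video", "eiffel_tower_night.mp4"),
])
priority = ["video", "shot_breakdown", "still", "text_to_speech", "script"]
print(select_assets([17, 42], priority, store))
```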
In a first embodiment, the pre-visualization is rendered as a timeline that marks the length of each scene.
In a second embodiment, the device is further configured to divide a script into scenes. It is advantageous that the device is further configured to estimate the length of a scene by applying a rule to the script for the scene; the rule can multiply the number of pages in the script of the scene by a predetermined time, and the rule can be applied differently to dialog and to description.
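A minimal sketch of such a rule is shown below, assuming a one-page-per-minute baseline and illustrative weighting factors for dialog and description; the concrete figures are assumptions, not values prescribed by the embodiment.

```python
# Illustrative sketch of the length-estimation rule: page count multiplied by a
# predetermined time, applied differently to dialog and to description.
# The weights below are assumptions for illustration, not values from the text.

PAGE_SECONDS = 60.0          # rule of thumb: one script page ~ one minute of film
DIALOG_FACTOR = 1.2          # assumed: dialog pages tend to play slightly longer
DESCRIPTION_FACTOR = 0.8     # assumed: description pages tend to play shorter


def estimate_scene_length(dialog_pages: float, description_pages: float) -> float:
    """Return an estimated scene length in seconds."""
    return (dialog_pages * DIALOG_FACTOR +
            description_pages * DESCRIPTION_FACTOR) * PAGE_SECONDS


# A scene with half a page of dialog and a quarter page of description:
print(estimate_scene_length(0.5, 0.25))   # -> 48.0 seconds
```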
In a third embodiment, the types of assets comprise a script for the scene, a breakdown of shots for the scene, automated text-to-speech of dialog, automated scrolling of the dialogues, and graphical representation of characters and locations for the scene.
In a fourth embodiment, the device is further configured to retrieve direction choices for the scenes and use the direction choices when rendering the pre-visualization.
In a fifth embodiment, the device is further configured to, for at least one scene, combine a retrieved asset with an asset of a type of assets with lower priority.
Preferred features of the present invention will now be described, by way of non-limiting example, with reference to the accompanying drawings.
The present invention will be described using an example involving four parties—a writer, a director, a producer and a Computer-Generated Imagery (CGI) artist—collaborating using a pre-production tool. It should however be understood that this is just an example and that the present invention can extend to more parties.
For the purposes of the present invention the first input to the pre-production tool is the script, written by the writer. During pre-production, the script may be changed, for example by removing or reordering scenes, amending dialogs or changing the setting of one or more scenes.
As is well known, a script is usually written in a standard format as a sequence of scenes. Each scene has a heading that sets the location and a scene number, after which follows a description of what happens in the scene and any dialog. An example would be:
INT. FLORA'S KITCHEN—MORNING 117
SEBASTIAN
Mum, do we have any bangers?
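For illustration, a scene heading of the above form could be parsed as sketched below; the regular expression and field names are assumptions and not part of the described tool.

```python
# Hedged sketch of parsing a standard scene heading such as the one shown above.
import re

HEADING_RE = re.compile(
    r"^(?P<int_ext>INT|EXT)\.\s+"
    r"(?P<location>.+?)\s*[-\u2014]\s*"   # location up to a hyphen or em dash
    r"(?P<time_of_day>[A-Z ]+?)\s+"
    r"(?P<number>\d+)\s*$"
)


def parse_heading(line: str):
    """Return the interior/exterior flag, location, time of day and scene number."""
    match = HEADING_RE.match(line.strip())
    if match is None:
        return None
    return {
        "interior": match.group("int_ext") == "INT",
        "location": match.group("location"),
        "time_of_day": match.group("time_of_day").strip(),
        "scene_number": int(match.group("number")),
    }


print(parse_heading("INT. FLORA'S KITCHEN\u2014MORNING 117"))
# {'interior': True, 'location': "FLORA'S KITCHEN", 'time_of_day': 'MORNING', 'scene_number': 117}
```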
During pre-production, the script is broken down, which not only means taking decisions about how each scene will be made—for example, on location, in a studio or using chroma key compositing—but also communicating and documenting the decisions. The present invention makes it possible to produce project-related information digitally using the tool, which advantageously is implemented online and accessed through a standard web browser so that it can be used remotely.
Preferably, the tool is not only available to the parties that participate actively in the pre-production (writer, director, producer, CGI artist) but also to other participants in the project (actors, Visual Effects (VFX) specialists, etc.) since this can allow everyone to share the director's vision of the movie. It is also preferred that only the active parties can input or modify data, and that each party's tool is adapted to the needs of the party; the writer does not have the same needs as the producer or the CGI artist.
Through the interface 150, each user can access the projects in which they are involved. The possible actions depend on the role of the party in the project; a party may have different roles in different projects and it is also possible that the director or the producer limits a party's access beyond the standard access provided by the tool. For example, members of a rating agency may be allowed to preview (the present version of) the movie to give a rating evaluation but they should not be allowed to modify anything.
The modules of the project server provide the main functionality of the tool 100 as follows:
The project management module 161 provides the framework of the tool, such as account handling, user log-on, message handling (receiving, sending and archiving messages), presentation of task lists, etc.
The data access module 162 enables users, provided that they have the necessary access rights, to view data for the project. Depending on the role, a party may have access to all of the data or a subset thereof, for example limited to one scene of the script and to information relating to the tasks of the party.
The asset recommendation module 163 is configured to analyze the script for key words, usually for a specific scene, in order to recommend assets. An asset may be a film scene that was shot previously but never used in a film, but it can also be of another kind such as audio, a photo or a 3D model. If, for example, the script states that the scene takes place close to the Eiffel Tower, then the asset recommendation module 163 is configured to search the asset database 180 for assets that are tagged “Eiffel Tower”. Further key words may be used to narrow the search, for example “night”, “winter”, “rain” and “scary”. The director or the producer may then choose an asset for the scene in question. The recommendation module preferably also takes into account contextual parameters such as those given in the script scene heading, where the location and the time of day are specified. When this heading specifies that the scene is in PARIS and at NIGHT, the recommendation module will not propose assets related to the Eiffel Tower in Las Vegas or China, nor will it propose elements that are not nocturnal.
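The following sketch illustrates, under assumed data structures, how such a keyword search combined with contextual filtering on location and time of day might look; the asset database 180 would in practice offer far richer metadata and indexing than this toy example.

```python
# Illustrative sketch of the keyword-plus-context search performed by the asset
# recommendation module 163. Data model and function names are assumptions.
from dataclasses import dataclass, field


@dataclass
class StoredAsset:
    name: str
    tags: set = field(default_factory=set)   # e.g. {"Eiffel Tower", "night"}
    location: str = ""                        # e.g. "PARIS"
    time_of_day: str = ""                     # e.g. "NIGHT"


def recommend(assets, keywords, scene_context):
    """Return assets tagged with all keywords and not contradicting the
    location / time-of-day context taken from the scene heading."""
    results = []
    for asset in assets:
        if not set(keywords) <= asset.tags:
            continue
        if scene_context.get("location") and asset.location and \
                asset.location != scene_context["location"]:
            continue   # e.g. discard the Eiffel Tower replica in Las Vegas
        if scene_context.get("time_of_day") and asset.time_of_day and \
                asset.time_of_day != scene_context["time_of_day"]:
            continue   # e.g. discard daytime footage for a NIGHT scene
        results.append(asset)
    return results


assets = [
    StoredAsset("paris_tower_night.mp4", {"Eiffel Tower", "night"}, "PARIS", "NIGHT"),
    StoredAsset("vegas_tower_day.mp4", {"Eiffel Tower"}, "LAS VEGAS", "DAY"),
]
print(recommend(assets, ["Eiffel Tower"], {"location": "PARIS", "time_of_day": "NIGHT"}))
```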
The direction assistant module 164 can be said to be an expert system that analyses the script to come up with suggestions for the direction of the scenes. For example, for exemplary script scene 117, the module easily deduces that it is an interior scene and that there are two characters, Flora and Sebastian. It is clear that no exterior shooting is needed, with what that entails in the way of permits, security and so on. A first direction possibility is to perform the shot in pure live shooting. For this, the location, i.e. the kitchen, needs to be built (in particular if more scenes in the script take place there); a rough estimate of the cost and delay (i.e. required preparation time) may be obtained from a database. Another option would be to shoot the actors on a green screen and composite this shooting with a CGI-rendered version of the kitchen, previously modeled in 3D using dedicated tools. Here again, a cost and delay estimation may be provided for the option. Note that, here again, reusing an existing asset (e.g. a 3D model of a kitchen) might be an efficient solution. Further, still using the database, “standard” direction options may be suggested, such as filming with a single-camera team using a number of different angles (Flora coming into the kitchen, close-ups of each person for the lines . . . ) or adding a camera to the team in order to shoot the scene in one go. In order to keep the estimates up to date, it is preferred to have the direction assistant module 164 communicate with an external estimate database.
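As a hedged illustration, the sketch below shows how scene analysis results could be mapped to direction options with cost and delay estimates; the option names and figures are invented placeholders standing in for the estimate database mentioned above.

```python
# Illustrative sketch of turning a scene analysis into direction options with
# cost and delay estimates. All figures and option names are placeholders.
from dataclasses import dataclass


@dataclass
class DirectionOption:
    description: str
    cost_estimate: float       # in some currency unit
    delay_estimate_days: int   # required preparation time


# Stand-in for the (external) estimate database: rough figures per technique.
ESTIMATE_DB = {
    "live_set_build": DirectionOption("Build the set and shoot live", 25000.0, 20),
    "green_screen_cgi": DirectionOption(
        "Shoot actors on green screen, composite with CGI set", 15000.0, 12),
    "reuse_asset": DirectionOption("Reuse an existing 3D model of the set", 5000.0, 4),
}


def suggest_options(reusable_asset_found: bool):
    """Return direction options applicable to an interior scene, cheapest first."""
    keys = ["live_set_build", "green_screen_cgi"]
    if reusable_asset_found:
        keys.append("reuse_asset")
    options = [ESTIMATE_DB[k] for k in keys]
    return sorted(options, key=lambda o: o.cost_estimate)


for option in suggest_options(reusable_asset_found=True):
    print(f"{option.description}: ~{option.cost_estimate:.0f}, "
          f"{option.delay_estimate_days} days")
```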
The pre-visualization module 165 is configured to display the “embryo” of the film in a best-effort attempt in one of various possible ways. The module may thus show a timeline that marks the length of each scene, with any available data indicated for each scene. Such data may be the script for the scene, if that is all that is available, but it may also be a representative still of an asset for the scene or a breakdown of the different shots that the director has planned, e.g. “5 second wide shot that pans as Flora enters the kitchen and reveals Sebastian; 3 second close up on Sebastian asking for bangers . . . ” Different options are possible for the pre-visualization: the length of each scene may be estimated using for example the rule of thumb that one page corresponds to one minute of film, or a rule that modifies the rule of thumb by taking into account the amount of dialog and the amount of description. The module may also display the data as a ‘film’ in its rudimentary form, showing assets that have been chosen, pictures of actors hired for the parts, rendering the dialog using automated text-to-speech and so on. The pre-visualization tool has, preferably modifiable, settings that define the preferred or best-effort order, i.e. a list of asset types with decreasing (or increasing) priority. This makes it possible for the tool to, for example, first check whether a video is available for the scene, then, if no video is available, whether a breakdown into shots has been defined, then whether a still of an asset is available and finally, as a last resort, fall back to automated text-to-speech or automated scrolling of the dialogues (at the speed of speech or not) to give an idea of the length. Other possibilities comprise storyboard images, rushes and processed rushes. It is also possible to render a combination of different assets, for example using a still together with automated text-to-speech or a possibly moving 3D model superimposed on a still.
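A minimal sketch of laying out such a timeline is given below; the field names, durations and chosen representations are illustrative assumptions and a scene may carry a combination of representations, e.g. a still together with text-to-speech.

```python
# Minimal sketch of laying out the pre-visualization as a timeline: each scene
# gets a start time, an estimated duration and the representation(s) chosen
# for it. Example data and field names are assumptions.

def build_timeline(scenes):
    """scenes: list of dicts with 'id', an estimated 'duration' in seconds and
    the 'representations' selected for the scene (best available asset types)."""
    timeline, start = [], 0.0
    for scene in scenes:
        timeline.append({
            "scene": scene["id"],
            "start": start,
            "duration": scene["duration"],
            "representations": scene["representations"],
        })
        start += scene["duration"]
    return timeline


scenes = [
    {"id": 17, "duration": 45.0, "representations": ["still", "text_to_speech"]},
    {"id": 42, "duration": 90.0, "representations": ["video"]},
]
for entry in build_timeline(scenes):
    print(entry)
```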
As already described, the example involves four users. The first user is the writer 110 whose main task is to provide the script. The second user is the director 130 who usually is the most active party, performing most of the operations and working with the script to define different shots, selecting assets to be re-used and taking direction decisions. The third user is the producer 120 who mainly interacts with the director 130 to discuss decisions and to make changes. The fourth user is a CGI artist 140 whose role is to work on specific production tasks.
1. The director 130 logs on 202 to the tool 100 through the web browser 150 on a laptop, obtains relevant user information 204, and visualizes a task list 205 and messages 203. The director selects the project “MY_FIRST_HORROR_MOVIE” 206 and browses the script 208. The script has been previously processed by identifying keywords and associated categories; for example, “Eiffel tower” is identified as a keyword and associated with a “location” category. The director decides to work on scene n° 42, 209 (but could also have worked with characters 211, locations 213 or key words 215, or have displayed a list of these). The director looks for assets 210 for this scene by performing asset searches 212 related to the keywords of the scene. This can be done manually: the director selects a keyword and launches an asset search related to this keyword. It can also be done automatically for some or all of the keywords of the script, in which case multiple asset searches are launched and their results are displayed when needed. The director selects a set of assets and may display the asset information 217 (e.g. format, quality, duration, price, etc.) related to a selected asset. The director then moves back to the direction phase and uses the direction assistant 214 to make direction choices defining the use of the selected assets.
2. The producer logs in 202, selects the project 206, possibly selects his role 201 (“producer”) in case he has multiple roles in this project, and uses the pre-visualization tool 218 to see the progress, but does not agree with the choices made for scene n° 17, as it would be cheaper to use a video or CGI background rather than the more expensive live shooting planned by the director. The producer then uses the communication tool 207 to communicate with the director (using chat, videoconference, phone call, email . . . ). They browse through the assets 210 together to find a possible solution, but as no asset fits their needs they decide to use a new CGI image that is to be created especially for this background. The producer modifies 208 the scene accordingly, requesting 216 the creation of the new asset (i.e. the CGI image), and may help in the creation thereof by for example providing a descriptive text about the asset as well as examples in the form of pictures or video. The director finally verifies that the task for the CGI image was created in the task list and updates the production workflow 220 by assigning the 3D modeling task to a team member with the appropriate availability and skill, to wit the CGI artist.
3. The director receives a notification 203 that scene n° 17 has been modified and opens the direction page 214 for the scene n° 17 directly from the notification to see the modification done by the producer.
4. The CGI artist, possibly after having received an email, logs in 202 and visualizes his task list 205 and messages 203 and there is indeed a new task: creation of the CGI image for scene n° 17. The CGI artist launches the task of background modeling (possibly using a preferred tool from which the asset can be uploaded to the tool) for the scene, models the asset and, when completed, signals the task as done.
5. The director then receives a notification 203 that this production task has been completed and awaits validation. From the notification, the director opens the created asset 210 and validates it. The task state and asset become approved, and a notification 203 is sent to the CGI artist.
As can be seen, a key element of the present invention is the aggregation of all the data related to the film-making project, allowing all participants easy access to the information needed to perform their respective tasks. In addition, the asset recommendation tool and the direction assistant can help the director and the producer to make direction and budget choices. In particular, the director may be able to make the film faster and cheaper owing to the reuse of assets, and the direction assistant can propose alternative direction choices, so that more focus can be put on the most important scenes; this can in addition prove useful for beginners. Through the tool, the director can define the vision for each scene, share this with the producer and the parties in charge of making the scenes, and have a rough preview of the movie project at any stage. The producer is able to control the progress continuously and is also able to encourage the director to maximize the reuse of assets in order to reduce the cost and to enable an earlier release date. All participants in the project benefit from the tool by having a better knowledge of the project and of what they are expected to do. This could allow producers to work with less experienced—and thus cheaper and more available—directors that are assisted by the proposed tool.
A further advantage is that the tool could lead to the emergence of a marketplace for freelance, remote workers since the tool enables easy access to all the information needed to perform their job.
Different parts of the functionality illustrated in the figures will now be described in further detail.
Log in 202: A user connects to a portal through a web browser, enters login and password to access the tool.
User information 204: Displays information concerning the user, such as: name and pseudonym, contact information (phone numbers, Skype alias, email . . . ), photo, a list of selected, pre-defined skills (e.g. “CGI rendering”) and availability information. This information is intended for the user in question, for directors and producers, and for the tool, which can propose a list of available resources for a given task. It can also be a means for the user to advertise his or her skills.
Roles 201: Once a project is chosen, the user can view and select his or her roles in the selected project (or the other way around: first select the role and then the project). Different roles have different privileges, e.g. the ability to modify the roles of other users in the project. The roles comprise “Writer”, “Director”, “Producer”, “CGI artist”, “Actor” and many others.
Messages 203: The user may access a list of messages, visualize messages, write messages, reply to incoming messages, and delete messages. Messages can for example be related to project assets or to tasks.
Tasks 205: The user can access a list of assigned tasks. For each task, the user may decline or accept the task, interact with the project manager, or signal the task as being done. For at least some tasks, the tool can provide the means to perform the task, such as a CGI tool, but it will be appreciated that many parties will prefer to use the tools to which they are accustomed.
Project choice 206: The different projects in which the user (or the user's chosen role) is participating are listed. The user can select one of these projects. For each project, the following elements are preferably displayed: Project Name, Project logo or Picture, Name of the project owner, Description area, Role(s) of the user in this project and Default parameters (e.g. default direction choices). Before a project has been selected, any other information (except the user information) is preferably not accessible.
Project control 216: This is mainly project administration. A user with an appropriate role can control settings of the project, for example by editing the project information (name, etc.) and by adding users with their role(s) within the project. These newly added users receive a notification of this. Ordinary users have less control; they are preferably only able to choose the level of notification (regarding any modification of the project, only assigned tasks, only elements worked on . . . ).
Script browsing 208: The user can browse through the script in different ways, for example scene by scene 209, by character 211, by location 213 or by key word 215.
It will be understood that variants and extensions of the tool described are possible. For example, the director may select an asset that needs to be “tuned” as it includes an undesired element, such as a modern car in a landscape shot that is intended for a costume drama. The director can then create a new task for digitally removing the car from the asset and assign the task to a suitable project member, much as the director did when assigning a task to the CGI artist in the exemplary use case.
In addition, it has already been briefly described how assets are tagged using key words. A production company that has finished a project may tag the assets it created but did not use and upload them to the asset database. Additional parameters can be extracted from these assets—e.g. time of day, direction of lighting and camera movements—and added to the asset metadata.
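By way of example, the metadata attached to an uploaded asset might be structured as sketched below; the field names mirror the examples given above but are otherwise assumptions.

```python
# Illustrative sketch of asset metadata combining manual key-word tags with
# parameters extracted from the footage itself. Field names are assumptions.
from dataclasses import dataclass, field
from typing import List


@dataclass
class AssetMetadata:
    keywords: List[str] = field(default_factory=list)          # manual tags, e.g. ["Eiffel Tower"]
    time_of_day: str = ""                                       # extracted, e.g. "NIGHT"
    lighting_direction: str = ""                                # extracted, e.g. "backlit from left"
    camera_movements: List[str] = field(default_factory=list)   # extracted, e.g. ["pan", "tilt"]


unused_shot = AssetMetadata(
    keywords=["Eiffel Tower", "rain"],
    time_of_day="NIGHT",
    lighting_direction="low key, street lamps",
    camera_movements=["slow pan left"],
)
print(unused_shot)
```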
It is further possible for a production company to create assets intended directly for the asset database. Such creation may for example be done using a multi-camera rig that allows simultaneous recording of different viewing angles and the resulting video can later be used to generate video corresponding to other viewing angles than the ones that were shot.
Asset search parameters. The following list shows exemplary search types, with some exemplary values, that may be used in asset searches:
type of asset: video, image, sound, 3D object, animation, motion capture, VFX, filter
quality (depending on type of asset):
compositing purpose:
camera parameters:
format
duration
ambiance/mood
price
lighting
colors
texture
Direction choices. The following list shows exemplary direction choices for the scenes/shots:
video
audio
Necessary postproduction tasks:
It will be appreciated that the tool is best implemented using the required hardware and software components, such as processors, memory, user interfaces, communication interfaces and so on. How this is done is well within the capabilities of the skilled person. As an example, the users' browsers are advantageously implemented on the users' existing computers or tablets, while the databases can be implemented on any suitable prior art database and the server on any suitable prior art server.
The skilled person will appreciate that the present invention can provide a tool for efficient collaborative pre-production.
Each feature disclosed in the description and (where appropriate) the claims and drawings may be provided independently or in any appropriate combination. Features described as being implemented in hardware may also be implemented in software, and vice versa. Reference numerals appearing in the claims are by way of illustration only and shall have no limiting effect on the scope of the claims.
Number | Date | Country | Kind
--- | --- | --- | ---
12306581.5 | Dec 2012 | EP | regional

Filing Document | Filing Date | Country | Kind
--- | --- | --- | ---
PCT/EP2013/075919 | 12/9/2013 | WO | 00