Screenwriting is often a collaborative effort in the movie industry. To make the most efficient use of expertise and the best use of time, a storyline is typically shared among a group of writers. Each writer separately creates the part of the story assigned to them, with the hope that the parts will be cohesive once recombined. However, problems arise when the writers become isolated, whether by distance or by lack of communication. Facts often become distorted or changed as the storyline sections drift from their original intent, which happens frequently as creative efforts flow in different directions. This can be circumvented by frequent reviews of the individual works by a fact reviewer or editor in charge of maintaining consistency.
But as the storyline grows in size and complexity, this review becomes more and more time consuming and can interrupt the ongoing efforts of the individual writers as they submit their sections for review and wait for feedback. In addition, as the complexity of the storyline increases, so does the complexity of checking for consistency and interpreting corrections. Individual writers also may not have a feel for how consistent their work is compared to that of the other writers. The editor may not be able to step back and tell whether the inconsistencies lie mainly in the characters, the storyline, the titles and/or the settings of the different sections, especially in large projects. This reduces the editor's ability to manage the project because there is a lack of trend information that could help eliminate consistency errors before they occur. Many products are available to assist with screenwriting, such as storyboarding or outlining tools, but they lack the ability to automatically and easily relay consistency information.
Consistency graphing techniques assist story writers using screenwriting tools to generate multiple consistent story documents. The techniques easily relay comparison information to facilitate the creation of a story, resulting in a movie screenplay (e.g., a movie script), a book, a stage play, a game scenario and/or any other form of a story. They allow better sharing and communication of the results to enhance the end product, both in quality and efficiency (e.g., time, cost, etc.). The techniques can be used to illustrate comparisons between a number of documents, through a textual analysis and a dedicated graphical representation, and are easily adaptable/scalable to any number of documents or elements. The techniques are also applicable to other forms besides textual documents. They can be applied, for example, to musical compositions and the like as well.
The above presents a simplified summary of the subject matter in order to provide a basic understanding of some aspects of subject matter embodiments. This summary is not an extensive overview of the subject matter. It is not intended to identify key/critical elements of the embodiments or to delineate the scope of the subject matter. Its sole purpose is to present some concepts of the subject matter in a simplified form as a prelude to the more detailed description that is presented later.
To the accomplishment of the foregoing and related ends, certain illustrative aspects of embodiments are described herein in connection with the following description and the annexed drawings. These aspects are indicative, however, of but a few of the various ways in which the principles of the subject matter can be employed, and the subject matter is intended to include all such aspects and their equivalents. Other advantages and novel features of the subject matter can become apparent from the following detailed description when considered in conjunction with the drawings.
The subject matter is now described with reference to the drawings, wherein like reference numerals are used to refer to like elements throughout. In the following description, for purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the subject matter. It can be evident, however, that subject matter embodiments can be practiced without these specific details. In other instances, well-known structures and devices are shown in block diagram form in order to facilitate describing the embodiments.
Previously, when editing multiple documents related to the same story, there was no means to ensure that the story remained consistent. This is the case for a single editor, but it is even more critical when the documents are edited collaboratively by multiple different authors. The present techniques solve these problems by providing writers with indicators that inform them whether a document is consistent with the other documents of the story, allowing early detection of inconsistencies between the drafts of different authors and indicating which document(s) must be updated to recover consistency. Documents concerning the same story creation project are collected, analyzed and returned to the authors with indicators showing a level of consistency. This allows the authors to quickly identify the parts of the story that are tagged as inconsistent and to rework them. The consistency analysis results are given by a graphic that shows the different consistency problems regardless of the number of documents taken into account. The techniques described herein are also applicable to other forms or elements besides textual documents and storylines. For illustrative purposes, story writing and consistency information are used in the examples that follow.
The general objective of screenwriting tools is to help a writer generate the script, a document (generally around 150 pages) that details the actions and dialogs of the different characters in each scene. The writer has to define the main concepts of the story (e.g., locations, characters, etc.) as well as the interactions between them. It is a lengthy creative task that requires many iterations. In the past, everything was done on paper in a manual, tedious fashion. Today, screenwriting tools can assist in the digital domain and help the writers in this difficult task. One way to help make documents consistent is the successive creation of multiple texts about the story, at different levels of detail and of different lengths. For the sake of homogeneity, these texts are hereafter called “documents,” even the smallest ones. The objectives of these documents are similar: defining what the story is about by describing the main characters, the setting, the events, the tension between the characters, etc., depending on the length of the document itself. Nothing is standardized, and many variations in the number, size and content of those documents are possible.
Below is a general example of a storyline that illustrates different documents associated with that storyline (and illustrates storyline terminology):
As can be seen from the above example, as the documents become longer and longer, it becomes harder and more time consuming to verify consistency between the documents, and harder still to derive trend information from the inconsistencies that could help prevent future problems from occurring. To solve these issues, the following techniques are employed to generate a graphical representation of the inconsistencies between the documents. The representations easily communicate where the inconsistencies exist and to what degree (e.g., severity and/or number, etc.).
As shown in
In most cases, documents are not available within a single tool but are present in different locations. It is therefore necessary to gather them to check their consistency. In some cases, the documents are available directly through a common management tool; collecting the documents is then no longer required, and the consistency check can be computed in real time. Real-time consistency checking allows errors to be quickly corrected before they propagate throughout a document, saving time and aggravation.
The techniques for determining consistency can vary and are not critical to the graphical representation of the consistency check. The result of the determination is typically a floating-point value between 0.0 and 1.0, indicating no consistency and perfect consistency of a storyline element, respectively. In the case where a determination is not possible between two elements of a document, the result of the consistency can be set to a special value, for example, −1 (to indicate that a determination cannot be made).
A simple example of a technique is to determine which words are present in each document and compare them. With two texts, determine the number of words common to both texts and divide it by the size (word count) of the smaller text. This division gives more importance to the words of the smaller text.
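As a rough illustration only, a minimal sketch of this word-overlap measure in Python might look as follows; the function name and the whitespace tokenization are assumptions made for this example rather than requirements of the technique.

```python
def word_overlap_consistency(text_a, text_b):
    """Number of distinct words common to both texts, divided by the word
    count of the smaller text. Returns -1.0 when no determination is possible."""
    words_a = text_a.lower().split()
    words_b = text_b.lower().split()
    if not words_a or not words_b:
        return -1.0  # the special "no determination" value mentioned above
    common = set(words_a) & set(words_b)
    return len(common) / min(len(words_a), len(words_b))
```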
This technique is very simple, but the results are very good when at least one text is small. The main problem with this formula is its direct use of the words as they appear in the text. Typically, a word that is singular in one text and plural in the other will not be counted in the determination. To counter this effect, it is possible to use part-of-speech tagging (POST). This method associates each word with a normalized, simplified form and categorizes it as a noun, verb, adjective, adverb, etc.
It allows matching of verbs that are not conjugated in the same tenses, matching of nouns that appear in the plural in one text and in the singular in the other, matching of adjectives with different genders, etc. The basic formula does not take into account the word categories produced by POST algorithms. Indeed, not all categories of words are relevant for this type of computation. Conventionally, in a text, the comparison can be limited, without much loss of relevance, to nouns, verbs and adjectives.
Limiting the comparison to these three main categories gives good results for longer and more structured texts. However, some words (such as ‘have’ and ‘be’) carry too much weight. To avoid this, it is possible to eliminate the most common words in each of these categories, for example, the 25 most frequent words.
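Under the assumption that a POS tagger/lemmatizer such as spaCy is available, the refined comparison might be sketched as follows. The library choice, the model name "en_core_web_sm", and the corpus-wide frequency cut-off are assumptions of this sketch (a fixed stop list per category would be another reading of the 25-word filter), not requirements of the technique.

```python
import spacy
from collections import Counter

nlp = spacy.load("en_core_web_sm")  # assumes this English model is installed
KEPT = {"NOUN", "VERB", "ADJ"}      # restrict to nouns, verbs and adjectives

def lemma_bags(documents, drop_top_n=25):
    """Lemmatize each document, keep only the three main categories, and drop
    the drop_top_n most frequent lemmas of each category across all documents."""
    bags = []
    for text in documents:
        bag = {}
        for tok in nlp(text):
            if tok.pos_ in KEPT and tok.is_alpha:
                bag.setdefault(tok.pos_, []).append(tok.lemma_.lower())
        bags.append(bag)
    common = {}
    for pos in KEPT:
        counts = Counter(l for bag in bags for l in bag.get(pos, []))
        common[pos] = {w for w, _ in counts.most_common(drop_top_n)}
    return [
        {l for pos, lemmas in bag.items() for l in lemmas if l not in common[pos]}
        for bag in bags
    ]

def lemma_consistency(set_a, set_b):
    """Same ratio as before, computed on the filtered lemma sets."""
    if not set_a or not set_b:
        return -1.0  # determination not possible
    return len(set_a & set_b) / min(len(set_a), len(set_b))
```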
All of these types of formulas work regardless of the type of text and language. In some specific cases, it can be advantageous to determine new formulas that take into account the specificities of the texts, such as language or technique.
The tables below illustrate the use of the above techniques on an example movie storyline, “Jurassic Park.” In this example, information about the treatment is not available; therefore, the principle is not applicable to those elements, which are identified as N/A in the tables below.
[Table: pairwise consistency values for the “Jurassic Park” example; most document pairs show 100% consistency, one shows 45%, and the treatment entries are marked N/A.]
The consistency determination might not be symmetrical, and can depend on the references chosen. As a result, performing the checks, for example, on a set of five documents can lead to 5×2×5 determinations and therefore 50 consistency values between the five texts, T1 to T5, as illustrated in
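A sketch of how such a set of pairwise values might be collected into a matrix is given below. The helper name and the convention of a 1.0 self-consistency are assumptions for illustration; because the measure can be asymmetric, measure(a, b) is computed separately from measure(b, a).

```python
def consistency_matrix(documents, measure):
    """Build the full table of pairwise consistency values. The measure is
    applied in both directions since the result may depend on which document
    is taken as the reference.
    e.g., consistency_matrix(texts, word_overlap_consistency)"""
    n = len(documents)
    return [
        [1.0 if i == j else measure(documents[i], documents[j]) for j in range(n)]
        for i in range(n)
    ]
```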
Here the problem is to determine a generic graphical representation that allows the user to quickly know where the consistency issues are located and to what extent. To do this, the representation is based on the use of polygons. For ease of understanding and for aesthetic and computational reasons, it is restricted to regular convex polygons. The number of vertices of the polygon is determined by the number of compared texts or documents (or other elements, etc.). For ease of explanation of an example scenario, the document number is limited to five. However, the techniques utilized are applicable to any number of documents/elements.
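Purely as an illustration of the geometry, the vertex positions of such a placement polygon might be computed as in the sketch below. The radius, the coordinate convention and the 270° default starting angle are assumptions for this example (the starting point is discussed further under Option “D” below).

```python
import math

def placement_vertices(n_elements, radius=1.0, start_deg=270.0):
    """One vertex of a regular convex placement polygon per compared element."""
    step = 360.0 / n_elements
    return [
        (radius * math.cos(math.radians(start_deg + i * step)),
         radius * math.sin(math.radians(start_deg + i * step)))
        for i in range(n_elements)
    ]
```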
The representation is constructed in two stages: first the placement figure is drawn (e.g., a placement polygon used to place the consistency polygons), and then, for each storyline element, its consistency schema (e.g., consistency polygon) is drawn. In the example 300 of
As illustrated in the example 400 of
Option “A”—A portion 602 corresponding to the analyzed element can be offset as depicted in the example 600 of
Option “B”—It is possible to use grayscale instead of color. In the example 800 of
Option “C”—As shown in the example 1000 and example 1100 of
Option “D”—In the case where the number of elements is even, it is possible to use a different positioning, and therefore a different partitioning, that allows a more compact placement. In this case, an element is not associated with a vertex but with the center of a segment connecting two vertices. The starting point is 0° rather than 270°, and the parts are defined by the center and the two vertices flanking an element's point. This positioning is optimized for a small number of polygons (a sketch of this midpoint placement follows these options). The example 1200 of
Option “E”—In the case where there are many elements, it is possible to apply a filter to see only the relationships of a particular element with all the others. The examples 1400 and 1500 of
Option “F”—The examples 1600 and 1700 of
Option “G”—Example 1800 of
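As a small sketch of the Option “D” positioning referenced above, and assuming the vertex layout from the earlier placement_vertices sketch, each element can be placed at the midpoint of a segment joining two consecutive vertices:

```python
def midpoint_positions(vertices):
    """Place each element at the center of a segment connecting two vertices,
    rather than at a vertex, for a more compact layout when the element count
    is even (Option "D").
    e.g., midpoint_positions(placement_vertices(4, start_deg=0.0))"""
    n = len(vertices)
    return [
        ((vertices[i][0] + vertices[(i + 1) % n][0]) / 2.0,
         (vertices[i][1] + vertices[(i + 1) % n][1]) / 2.0)
        for i in range(n)
    ]
```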
These different types of representations described above can be generated by a system 1900 such as that shown in
The polygon constructed by the builder 1906 based on the number of processed elements represents a placement polygon that is used to place or organize comparison polygons (e.g., consistency polygons). The comparison polygons typically fill a portion of the placement polygon associated with a particular element. These comparison polygons represent comparison information (e.g., a level of consistency) of an element between other elements. They are placed by the builder 1906 at or near a portion of the placement polygon assigned to that particular element. The comparison polygons can be placed within the placement polygon or in proximity to a vertex or segment. This allows the comparison polygons to be offset, etc. to enable better visual effects, making the element comparison information easier to comprehend. Likewise, color, grayscale, and/or three-dimensional effects can be employed.
The builder 1906 can also change its output representation 1910 based upon a user input 1912. A user can apply different filters to affect the visual representation 1910, highlighting some documents while diminishing others, etc. For example, a user can isolate one element in particular to more clearly see its comparison with other elements. A user can also select which elements to compare or not to compare. A user can also decide which colors represent what elements and/or which colors represent the comparison (e.g., a certain level of consistency—red indicates poor consistency, yellow indicates borderline consistency, green indicates good consistency, etc.). A user can also select particular color schemes and/or grayscale indicators for comparison polygons. User input can also be utilized to hide the placement polygon to clean up the output representation 1910. One skilled in the art can appreciate that other types of user input can be used to influence the representation 1910 (e.g., limiting elements, scaling larger or smaller, etc.).
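A sketch of how user-selected consistency colors might be applied is shown below. The numeric thresholds are assumptions chosen only for illustration; the red/yellow/green scheme follows the example just given, and a neutral grey is used for the special "no determination" value.

```python
def consistency_color(value, thresholds=(0.5, 0.8)):
    """Map a consistency value to a display color. A value of -1 (no
    determination possible) is shown in a neutral grey."""
    if value < 0:
        return "grey"
    if value < thresholds[0]:
        return "red"      # poor consistency
    if value < thresholds[1]:
        return "yellow"   # borderline consistency
    return "green"        # good consistency
```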
In view of the exemplary systems shown and described above, methodologies that can be implemented in accordance with the embodiments will be better appreciated with reference to the flow chart of
The comparison analysis and the number of elements are then utilized to determine a representation of the comparison between elements 2006. A polygon is used as a placement figure for placing further polygons that represent comparisons of an element. The number of vertices or segments is determined by the number of elements being compared. Once the placement figure is established, smaller polygons or “comparison polygons” are used to represent comparison information of one element compared to another element. It is also possible to use the placement polygons for placement of comparison polygons without making the placement polygons visible to a user of the interface. This can facilitate relaying comparison information to a user. Likewise, different colors of comparison polygons can be used to represent different elements. This allows the user to easily relate comparison values with a specific element. Numeric information can also be shown on the representations to aid in conveying information.
The comparison polygons can also be offset from the placement polygon for effect and/or displayed within the placement polygon. Elements can be represented by the vertices or segments of the placement polygon. Filters can also be applied to the representation. For example, a user can filter the representation based on a specific element so that it is easier to interpret. The filtering can be accomplished by leaving a selected element in color and using grayscale on the other documents, and/or by bringing certain elements associated with the selection forward with a three-dimensional effect to aid in conveying the information.
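Tying the earlier sketches together, a rough end-to-end illustration might look as follows. It reuses the hypothetical helpers defined above (consistency_matrix, placement_vertices, consistency_color) and uses matplotlib purely as a stand-in renderer; the actual drawing of comparison polygons is left to the figures described earlier.

```python
import matplotlib.pyplot as plt

def render_overview(documents, labels, measure):
    """Compute the pairwise consistency matrix, lay the labels out on a
    placement polygon, and annotate each vertex with that element's average
    consistency against the other documents (numeric information shown on
    the representation, colored per the user-selected scheme)."""
    matrix = consistency_matrix(documents, measure)
    verts = placement_vertices(len(documents))
    xs, ys = zip(*(verts + [verts[0]]))       # close the polygon outline
    plt.plot(xs, ys, color="lightgrey")       # placement polygon (can be hidden)
    for i, (x, y) in enumerate(verts):
        others = [v for j, v in enumerate(matrix[i]) if j != i and v >= 0]
        avg = sum(others) / len(others) if others else -1.0
        text = f"{labels[i]}\n{avg:.0%}" if avg >= 0 else f"{labels[i]}\nN/A"
        plt.annotate(text, (x, y), ha="center", va="center",
                     color=consistency_color(avg))
    plt.axis("equal")
    plt.axis("off")
    plt.show()
```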
The systems and methods described above can be applied to other sets of data besides textual types of information and are not limited to the textual illustrations given above. For example, they can also be applied to musical compositions, with the comparison determinations based on the notes and scales used.
What has been described above includes examples of the embodiments. It is, of course, not possible to describe every conceivable combination of components or methodologies for purposes of describing the embodiments, but one of ordinary skill in the art can recognize that many further combinations and permutations of the embodiments are possible. Accordingly, the subject matter is intended to embrace all such alterations, modifications and variations that fall within the scope of the appended claims. Furthermore, to the extent that the term “includes” is used in either the detailed description or the claims, such term is intended to be inclusive in a manner similar to the term “comprising” as “comprising” is interpreted when employed as a transitional word in a claim.
Number | Date | Country | Kind
---|---|---|---
14 306 512.6 | Sep 2014 | EP | regional