The rise of “Big Data” has generated a great deal of excitement in connection with data analytics. Given the massive amount of data present in Big Data systems and applications, there is an expectation that data analysts will gain access to new insights and information that were previously unavailable (or impractical to obtain). However, in practice, computer scientists have felt some frustration because the enormous investment in Big Data has resulted in relatively few success stories. While the Big Data model has proven very good at amassing data, it is believed that the art suffers from a serious shortage of technology solutions that facilitate the analysis of large amounts of data to produce meaningful information, intelligence, and insights for data consumers. In other words, Big Data methods and analyses have had difficulty finding something interesting and meaningful within the Big Data and communicating such interesting/meaningful information to a user.
For example, finding meaningful insights, trends, or information within Big Data remains extremely reliant on a human interpreting the data as the data are displayed. Current Big Data approaches fail to effectively communicate what the computer may have found as a result of complex Big Data analysis. Spreadsheets and graphs are useful representations of data, but only to the people who understand such graphs and spreadsheets. Therefore, despite all the powerful machines in use to gather and process Big Data, companies still rely on a person looking at a screen to find the meaningful or most important information gleaned from the Big Data—and that same person then communicates the meaningful or most important information to everyone else. In other words, the responsibility for telling the Big Data's “story” falls upon these so-called “data scientists.” The inventors believe that there is a significant need for better technology to help people assess the meaningful information that may be present within large sets of data.
It is in view of the above problems that the present invention was developed to help improve how narratives based on data are computer-generated. In example embodiments described herein, communication goals serve as the focus from which narratives are generated. As used herein, the term “communication goal” refers to a communicative purpose for a narrative and/or a meaning or class of meanings that is intended to be conveyed by the narrative to the narrative's reader or consumer. For example, if a person wants to know how a retail store is performing, the system described herein can be configured to automatically generate a narrative that is meant to satisfy the communication goal of providing information that is responsive to the person's desire to gain knowledge about the retail store's performance. With example embodiments, narrative or communication goals are represented explicitly; relationships are established between these structures and content blocks defined to specify how to fulfill the represented communication goals. In example embodiments, relationships can also be established between any or all of these structures and the narrative analytic models that specify the nature of the data and data analyses required to fulfill the represented goals. In additional example embodiments, relationships can also be established among communication goals themselves. The relationships between communication goals, narrative analytic models, and content blocks allow a computer to determine when to use or not use those, or related, content blocks. In other words, the communication goal data structures constrain the content blocks needed to fulfill a narrative goal. Using the communication goal data structures as a guide, a computer may generate meaningful narratives by determining the content blocks and narrative analytics associated with a given communication goal data structure.
By representing communication goals, the communication goal data structures constrain the nature of the data necessary to fulfill the narrative goal and to provide a narrative that answers the questions naturally asked, whether explicitly or simply internally, by a reader. That is, only a subset of available data in a data domain is generally needed to accomplish a desired communication goal, in which case the computer can limit the processing for the automated narrative generation to such a data subset. By specifying a communication goal data structure in advance, the computer can determine the nature and amount of data it needs to analyze in order to automatically generate a narrative that satisfies the communication goals. Another benefit of constraining data processing to a data subset determined in part by communication goal data structures is that the reader is not overwhelmed with irrelevant or at least comparatively less important data and analyses of those less important data. The data presented in the narrative are constrained by the communication goals the narrative aims to fulfill, so that the important data are not buried in the middle of a great deal of other, less important information, and hence are more easily understood.
Finally, for example embodiments, because the communication goal data structures, and their relationships to each other, to content blocks, and ultimately to narrative analytic models and relevant data types and data, are specified in advance, and because data can be analyzed after selecting a communication goal data structure, data may even be analyzed in real-time or in an on-demand manner in response to an input from a user. Such a system is capable of generating a narrative interactively, rather than all at once. An interactive narrative, which responds to a user's questions or prompts, may more efficiently respond to a user's dynamic and changing needs for information and thus enable the user to focus on what matters to him or her. For example, with an example interactive embodiment, rather than requiring an owner of a retail store to read an entire narrative report summarizing a store's performance in order to find the information he or she currently needs, the narrative can be constructed interactively to address exactly those specific information needs. The computer may respond to inputs from the owner and provide information in natural language and other forms, interactively, that is responsive to the owner's inputs—without generating an entire report narrative.
Further features and advantages of the present invention, as well as the structure and operation of various embodiments of the present invention, are described in detail below with reference to the accompanying drawings.
The accompanying drawings, which are incorporated in and form a part of the specification, illustrate the embodiments of the present invention and together with the description, serve to explain the principles of the invention. In the drawings:
Referring to the accompanying drawings in which like reference numbers indicate like elements,
The processor 100 can be configured to execute one or more software programs. These software programs can take the form of a plurality of processor-executable instructions that are resident on a non-transitory computer-readable storage medium such as memory 102. For example, the one or more software programs may comprise an artificial intelligence program. The processor 100 may execute software to implement artificial intelligence that generates narrative stories. In an example embodiment, the generation of narratives by processor 100 is based on an explicit computer-readable data representation of a communication goal. Such a computer-readable data representation is referenced herein as a communication goal data structure. Examples of communication goal data structures are described below.
When reading a narrative, a reader expects the narrative to answer certain questions about the data discussed by the narrative. As such, it is strongly desirable to take into consideration the anticipated questions from the reader when driving the generation of the narrative. That is, it is desirable for the narrative generator to anticipate the questions asked by the reader and in turn answer these anticipated questions. Moreover, as the narrative progresses or unfolds, answering one question often naturally raises other questions in the mind of the target reader, in which case it would also be desirable for the narrative to address these other questions as well. In other words, there is a natural relationship between different communication goals for a narrative in that the fulfillment of one communication goal will often give rise to other communication goals, which in turn must be fulfilled by the narrative. For example, if a writer were to draft a performance report about a store, the reader may anticipate questions such as: “what is the status of the store?” “how well is the store performing?” “why is the store performing well or poorly?”, etc., and the narrative generator can be designed to provide answers to these questions.
At step 110, the processor 100 receives input regarding one or more communication goals or is already configured with one or more such goals. As an example, this input can come from a user who indicates a communication goal (where the user may be an author or consumer of narratives). As another example, the processor may be pre-configured with input for a specific communication goal or set of communication goals. At step 112, the processor determines one or more content blocks for a story specification 118 based on an explicit data representation of the communication goal(s) described by the input at step 110. That is, the processor 100 accesses or creates one or more communication goal data structures in memory 102 based on the user input or processor configuration and uses this communication goal data structure to determine one or more content blocks and to drive how a content block is configured for inclusion in a story specification 118. Step 112 may further comprise ordering how the content blocks will be presented to the user.
As explained below, steps 110 and 112 can be performed in either or both of an authoring mode and a user-interactive mode. In an authoring mode, a user provides input to configure a communication goal data structure to author a story specification designed to achieve a particular communication goal. Thus, in the authoring mode, the user is focused on creating and configuring an appropriate instantiation of a story specification 118 to meet a perceived communication goal need or multiple perceived communication goal needs. In a user-interactive mode, a user provides input relating to a desired communication goal for which the user wants a narrative to be generated. In the user-interactive mode, the system 104 may already include a number of communication goal and content block data structures in memory, and the system aims to leverage the communication goal relating to the input at step 110 to drive the selection of and instantiation of an appropriate story specification that is tailored to the communication goal corresponding to the user's input.
Examples of suitable models for content blocks and story specifications are described in the above-referenced and incorporated U.S. Pat. No. 8,630,844. As with the '844 patent, the story specification 118 and the content blocks therein are specified as computer-readable data that is not in a programming language (or machine code) directly executable by a computer. Instead, the story specification 118 and content blocks therein are models that help define how narratives should be generated from a given data set.
A parsing engine 114 parses and interprets the story specification 118 to generate the actual programmatic data structures that are directly executable by the processor. These programmatic data structures serve as part of a narrative generation engine 116. The narrative generation engine 116 is configured to process input data about which the narrative is to be generated, and, based on this processing, automatically generate a narrative that is tailored to the communication goal(s) determined in step 110. Also, the parsing engine 114 may supply parameters to the content block determined in step 112. These parameters may define which metrics will be used to write the story. For example, supplying the parameter to the content block may involve setting the content block to review “total unit sales” for a store. The actual data representing the total units sold are not supplied at this step, but the determination gives a value to the top-line metric parameter, and the narrative generation engine later uses this to process actual data based on this parameter. Example embodiments of the parsing engine 114 and the narrative generation engine 116 are described in greater detail in the above-referenced and incorporated U.S. Pat. No. 8,630,844.
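The parameter-supply step can be sketched as configuring a content block with a metric name only, with no actual sales data attached yet. This is a minimal illustrative sketch, not the patented implementation; the class and method names are assumptions made for the example.

```python
class ContentBlockConfig:
    """Illustrative stand-in for a content block awaiting its parameters."""

    def __init__(self, name):
        self.name = name
        self.params = {}

    def set_param(self, key, value):
        # Only the parameter value (here, a metric name) is stored; the
        # actual data are processed later by the narrative generation engine.
        self.params[key] = value
        return self


block = ContentBlockConfig("feature-over-time").set_param(
    "top_line_metric", "total unit sales")
```

The point of the sketch is the separation of concerns described above: configuration fixes *which* metric matters, while data processing is deferred to narrative generation time.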
The memory 170 may also include a content block library 160 comprising one or more content block data structures 162i. Each content block data structure 162i is a specification of data and computational components that are necessary to generate a section of a narrative. These data and computational components define a narrative analytics model for a narrative section. As explained herein, the content block data structure's specification can be a high-level configurable and parameterized specification. Through configuration of a content block data structure 162i, the content block data structure 162i can be adapted to model how a wide variety of narratives in various content verticals and domains are to be generated.
The memory 170 maps each communication goal data structure 152i to one or more associated content block data structures 162i via associations 164. In this fashion, once the system has identified a communication goal that is appropriate for a user, the system can reference one or more content block data structures that are tailored to fulfill that communication goal (and do so transparently to an end user). The associations 164 can be implemented in any of a number of ways, including but not limited to pointers, linked lists, arrays, functions, databases, file structures, or any other method for associating data.
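The goal-to-content-block association can be sketched in a few lines of Python. This is a minimal illustrative model, not the described system's implementation; the class names, goal names, and block names are all assumptions chosen for the example.

```python
from dataclasses import dataclass, field


@dataclass(frozen=True)
class ContentBlock:
    name: str  # e.g., a narrative analytics model for one narrative section


@dataclass
class CommunicationGoal:
    name: str
    # the associations 164, here realized as a simple list of content blocks
    content_blocks: list = field(default_factory=list)


library = {
    "describe subject status": CommunicationGoal(
        "describe subject status", [ContentBlock("feature-over-time")]),
    "evaluate subject status": CommunicationGoal(
        "evaluate subject status", [ContentBlock("cohort comparison")]),
}


def blocks_for(goal_name: str) -> list:
    """Resolve a communication goal to its associated content blocks."""
    return library[goal_name].content_blocks
```

As the text notes, the same association could equally be realized with pointers, databases, or file structures; a dictionary is simply the most compact form to show.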
Instantiation may further include executing the resulting story specification formed by the content blocks specified by the communication goal data structures. Executing the resulting story specification results in the production of the actual narrative component specified by the content block.
Once parsed and translated into computer-executable form, the constituent components of a content block 206 delineated above provide the computational and data elements that, when executed, result in the generating of a narrative based on data.
Relative to the '844 patent, with this example embodiment, there exists a new layer for further parameterizing how content blocks can be configured—the communication goal data structure 200. The communication goal data structure 200 of the example embodiment of
Because the communication goal data structure 200 of
Steps 110 and 112 operate as described in connection with
The parsing engine 114 can build a content block collection 220 from the story specification 202. If the story specification 202 includes only a single content block, the content block collection 220 in turn can comprise a single content block. However, if the story specification 202 comprises multiple content blocks, the content block collection 220 can be an ordered listing of these content blocks.
The parsing engine can also build a model collection 222 based on the story specification 202, where the model collection 222 serves to identify and constrain the data to be processed by the system. Likewise, the parsing engine can build a derived feature collection 224, an angle collection 226, and blueprint sets 228 based on the story specification 202.
Processing logic instantiated as a result of the parsing engine 114 operating on the story specification 202 can then provide for content block selection 230. For example, when first processing data, the processing logic can select the first content block of the story specification in the content block collection 220. The processing logic can further build models for the data and compute any derived features that are necessary in view of the story specification (232 and 234). At 236, the processing logic tests the relevant angles for the subject content block in the angle collection 226. This operation can involve testing the specific data and derived features under consideration against the applicability conditions for the relevant angles. Based on which angle(s) is (are) deemed to accurately characterize the data and derived features, the processing logic can further order, filter, and select (238) one or more angles to be included in the narrative. As explained above and in the above-referenced and incorporated patents and patent applications, attributes of the subject content block and angle data structures can facilitate this decision-making.
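The angle-testing and selection operations (236 and 238) can be sketched as follows. The angle names, the importance attribute, and the single derived feature used here are illustrative assumptions; in the described system the applicability conditions and ordering attributes come from the angle collection 226 and the angle data structures.

```python
# Each angle pairs an applicability condition with an ordering attribute.
angles = [
    {"name": "strong growth", "importance": 2,
     "applies": lambda f: f["revenue_change_pct"] > 10},
    {"name": "modest growth", "importance": 1,
     "applies": lambda f: 0 < f["revenue_change_pct"] <= 10},
    {"name": "decline", "importance": 2,
     "applies": lambda f: f["revenue_change_pct"] < 0},
]


def select_angles(features, limit=1):
    """Test each angle's applicability conditions against the derived
    features (step 236), then order, filter, and select winners (step 238)."""
    applicable = [a for a in angles if a["applies"](features)]
    applicable.sort(key=lambda a: a["importance"], reverse=True)
    return applicable[:limit]
```

For instance, a derived feature of a 12% revenue change satisfies only the first angle's applicability condition, so that angle is selected for the narrative.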
Once the data has been modeled, the derived features have been computed, and one or more angles have been selected, the narrative generator instantiates a content block outline 240. The instantiated content block outline 240 can be a language-independent representation of the angles and features to be expressed for the section of the narrative represented by the subject content block, as described in the '844 patent.
If the story specification 202 comprises multiple content blocks, the execution can return to step 230 for the selection of the next content block for the story specification. Otherwise, the content block outline 240 is ready to be converted into human-interpretable form via blueprint sets 228.
Each content block is linked to one or more blueprint sets 228, each containing parameterizable blueprints to express the angles and/or features determined within that content block in natural language, for example English 2421, Spanish 2422, and any other desired languages such as Chinese 242n, etc. When selected and parameterized, these result in generating the actual text of the narrative in the desired language(s) (see 2441, 2442, . . . 244n).
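A blueprint set can be sketched as a parameterizable template per language for a given angle. The template strings, parameter names, and the single angle shown are assumptions made for illustration; actual blueprint sets are described in the incorporated patents.

```python
# One parameterizable blueprint per language, keyed by angle.
blueprints = {
    "strong growth": {
        "en": "{subject}'s {metric} rose {change}% this {period}.",
        "es": "El {metric} de {subject} subió un {change}% este {period}.",
    },
}


def realize(angle, language, **params):
    """Select the blueprint for an angle/language pair and parameterize it
    to produce the actual narrative text."""
    return blueprints[angle][language].format(**params)
```

Because the angle and its parameters are language-independent, adding a new output language only requires adding a template, not changing the analytics.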
Thus, the example embodiment of
While the example of
Any of a number of techniques can be used to implement steps 110 and 112 shown by
Referring to
At step 314, the processor populates a user interface for presentation to the user, wherein the user interface is populated with data entry fields and information based on the selected communication goal data structure. Examples of such user interfaces are discussed below in connection with
At step 316, the processor receives additional input through the user interface, where this additional user input further defines the communication goal. For example, the user input can specify the parameters that are to be addressed as part of the communication goal as well as values for these parameters. The example user interfaces of
At step 318, the processor configures the selected communication goal data structure based on the additional input received at step 316. In turn, the processor selects the content block data structure that is associated with the selected communication goal data structure (step 320), and the processor configures this selected content block data structure based on the configured communication goal data structure from step 318 (step 322). In doing so, the content block data structure becomes tailored to the user's communication goal.
As explained in connection with
Using Communication Goals to Focus Narrative Analytics and Constrain the Data Needed to Support Narrative Generation:
With example embodiments, algorithms, analysis, and data do not drive the story, but are invoked and utilized to create the story after the structure of the story has been specified according to the selected communication goals. This technique stands in contrast to others where a story is generated based solely on the data or based on some ad hoc determination. In example embodiments described herein, processing is constrained based on the specified communication goal data structure 200. In other words, the data analyzed, and the manner in which the data are analyzed, are constrained based on the communication goal(s) and the requirements of fulfilling the communication goal.
After the narrative analytics 402 gathers the data 410 for analysis to fulfill the communication goal, the system 400 may apply angles from pool 412 against the data 410 to identify which angle or angles are deemed to accurately characterize the data 410. It should be understood that data 410 may include derived features computed from input data based on the constrained narrative analytics 402. An angle whose applicability conditions are met by data 410 can then be proposed (414) for inclusion in a data assembly 416 that includes data from data 410. Applicability conditions are described in greater detail with reference to U.S. Pat. No. 8,374,848. Once the system 400 determines the data assembly 416, it is ready to automatically render the narrative from such a data assembly 416 (e.g., using blueprint sets) to create a narrative expressed in a natural language understood by a human reader. As such, the data assembly 416 can be represented by a content block outline as shown above and discussed in the '844 patent.
An example process flow 450 for generating a narrative based on constrained narrative analytics is illustrated in
At step 456, the processor 100 may receive or determine the analysis constraints 404 and the domain constraints 406. The processor 100 may receive these constraints as input from the user, or the processor 100 may be able to determine these constraints based on analysis of existing data associated with the subject communication goal. For example, in some embodiments, the selected communication goal data structure 200 may define constraints 404 and/or 406.
At step 458, the processor applies one or more of the data modeling components defined or specified by the content block referenced at step 454. These data modeling components serve to constrain and specify the nature of the data that is to be ingested when generating narratives. At step 460, the processor gathers input data in accordance with these data models. As noted, the gathered input data may only be a small subset of a larger data set.
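Steps 458 and 460 amount to ingesting only the fields named by the content block's data modeling components. The following is a minimal sketch; the field names are assumptions for illustration, not the system's actual data model.

```python
# Fields the content block's data model declares as necessary for the goal.
REQUIRED_FIELDS = {"store_id", "period", "total_unit_sales"}


def ingest(records):
    """Keep only the fields the narrative analytics actually require,
    discarding the rest of a potentially much larger record set."""
    return [{k: r[k] for k in REQUIRED_FIELDS if k in r} for r in records]
```

A record carrying dozens of unrelated attributes is reduced to the three fields the communication goal needs, which is the data-subsetting benefit described earlier.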
At step 462, the processor applies one or more of the computational components defined or specified by the content block referenced at step 454. For example, these computational components may specify one or more derived features that need to be computed from the gathered input data. These computational components may also test the gathered input data and any computed derived features against the applicability conditions of any angles that are relevant for the subject narrative analytics.
At step 464, the processor will propose one or more angles that are to be expressed in the resultant narrative. These proposed angles can be combined with the gathered input data and computed derived features to create a content block outline.
Lastly, the processor at step 466 references any related communication goal data structures with respect to the communication goal data structure selected at step 452. If there is no related communication goal data structure, the process 450 ends. If there is a related communication goal data structure, the process 450 can repeat itself using the related communication goal data structure.
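The loop from step 452 through step 466 can be sketched as a traversal of the goal graph. The goal names and relationships below mirror the describe/evaluate/explain example discussed later in this description; the data structure itself is an illustrative assumption.

```python
# Related-goal graph: fulfilling one goal raises the others (step 466).
related = {
    "describe subject status": ["evaluate subject status",
                                "explain subject status"],
    "evaluate subject status": [],
    "explain subject status": [],
}


def process_goals(start):
    """Return goals in the order process 450 would fulfill them."""
    order, queue, seen = [], [start], set()
    while queue:
        goal = queue.pop(0)
        if goal in seen:
            continue  # guard against cycles in the goal graph
        seen.add(goal)
        order.append(goal)                   # steps 454-464 for this goal
        queue.extend(related.get(goal, []))  # step 466: queue related goals
    return order
```

Starting from the "describe" goal thus yields a three-section story; starting from the "evaluate" goal alone yields a single section.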
Due to the interrelationships between the communication goal data structures 200, selection of one communication goal 200 may define an entire story specification 202.
For example, a first communication goal may be a “describe subject status” communication goal data structure 495A for a subject. The describe subject status communication goal data structure 495A defines a model for describing the subject's overall performance in terms of available data about the subject. As part of that model, the describe subject status communication goal data structure 495A calls a first content block 461. The first content block 461 may specify a model for describing the subject's status in terms of one or more metrics and how those metrics have changed over the recent past, which may involve describing the trajectory of those metrics.
Because a reader naturally expects an explanation and evaluation of the subject's status (“Why?”, and “How good or bad is this?”), the describe subject status communication goal data structure 495A may relate to an evaluate subject status communication goal data structure 495B and an explain subject status communication goal 495C. The interrelationships among the communication goals 495A, 495B, and 495C are illustrated by the dashed lines in
The evaluate subject status communication goal data structure 495B may call, for example, a second content block 462 that specifies a model for informing the reader what the subject's status means—is the subject's current status (the reported values of its metrics and their trajectories) good or bad? Furthermore, the explain subject status communication goal data structure 495C may specify a model for explaining why the subject's status has changed. As an example, the explain subject status communication goal data structure 495C may call third and fourth content blocks 463 and 464 that are designed to model how the communication goal of explaining the subject's status to the reader can be fulfilled.
Because the communication goal data structure 200 serves as the first data structure accessed by the processor 100 when automatically generating a narrative, the communication goal corresponding to the accessed communication goal data structure drives the generation of the story. As mentioned previously, algorithms, analysis, and data do not drive the story, but are invoked and utilized to create the story after the structure of the story has been specified according to the selected communication goals. Again, this technique stands in contrast to others where a story is generated based solely on the data or based on some ad hoc determination. In an example embodiment described herein, processing is constrained based on the specified communication goal data structure 200. In other words, the data analyzed, and the manner in which the data are analyzed, are constrained based on the communication goal and the requirements of fulfilling the communication goal.
It should be noted that communication goal data structures 200 may be linked into a story specification 202. For example, in the performance report for a subject (e.g. a retail store) described above, the initial communication goal might be “describe the status of the store” or said differently, “how is my store performing?” With this communication goal in mind, a story structure including the describe subject status communication goal data structure 495A, the evaluate subject status communication goal data structure 495B, and the explain subject status communication goal data structure 495C may comprise the entire performance report story specification 202. The performance report narrative may include other communication goals depending on the needs of the reader. For example, the reader may want to know whether he or she can expect his/her store to improve or decline. To fulfill this narrative goal, the processor 100 may include a communication goal data structure 200 that predicts future performance. Another store owner may want help on how to improve his store. To fulfill this narrative goal, the processor 100 may include an advise communication goal data structure 200 configured to offer suggestions on how to promote or maintain improvements or prevent declines in store performance in key metrics.
After determining the communication goal data structures 200, the computer system 104 may access the referenced content blocks 206 specified by the communication goal data structure 200. The content blocks 206 of the exemplary embodiments described herein include specially-configured narrative analytics models that are capable of fulfilling the overarching communication goal of each section of the narrative and the narrative as a whole. In this way, the content blocks 206 themselves specify the data that is necessary in order to fulfill the specific narrative goal represented by the communication goal data structure 200, and the narrative analytics models referenced by the content blocks 206 may specify angles (or angle families) that capture the appropriate characterizations or analyses of data in terms of important patterns that will determine what is to be expressed in the narrative. The content blocks may also specify blueprint sets that are associated with the content blocks and the angles for use when expressing information relevant to an angle within a narrative. An example narrative analytics model for a content block 206 is shown in connection with element 206 of
In addition to receiving parameters, the narrative analytic models 502-506 define different algorithms to fulfill the communication goal. The algorithms are defined in advance so that the narrative analytic model can present information that fulfills the communication goal.
In the examples shown in
The cohort comparison narrative analytic model 504 may include algorithms that compare the top-line metric and trajectory, which were calculated by the feature-over-time narrative analytic model 502, to peers defined by a peer parameter. In other embodiments, the narrative analytics model 504 referenced by the evaluate communication goal data structure 495B may compare the calculated top-line metric and trajectory to historical values or simply to the number 0 (i.e., whether the profit was positive or negative). The cohort comparison narrative analytic model may further receive parameters such as thresholds, benchmarks, expectations, industry sectors, and the like.
Also, as hinted above, the results of the comparison algorithms performed by the narrative analytics 504 specified by the evaluate subject status communication goal data structure 495B determine the angle or angles 208 used to automatically generate the narrative story. If the entity's numbers are lower than its peers', the angle(s) 208 chosen by the narrative analytics model 504 will differ from those chosen when the retail store's numbers are better than its peers'. As part of the angle(s) 208 applicability determination, the narrative analytics model 500 may need to receive a threshold that determines when a feature or change is significant. For example, one retail store may consider a 10% improvement in revenue significant, whereas another retail store may consider a 2% improvement significant. These thresholds may be set by a user, or by the computer system 104 evaluating historical data about the retail store or data about the retail store's peers, or through some other source of data.
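A minimal sketch of the cohort comparison with a configurable significance threshold follows. The characterization labels, the 10% default, and the use of a simple peer average are all illustrative assumptions.

```python
def evaluate_vs_peers(subject_value, peer_values, threshold_pct=10.0):
    """Compare a subject's top-line metric to its peer cohort and choose
    a characterization, treating differences smaller than the threshold
    as insignificant."""
    peer_avg = sum(peer_values) / len(peer_values)
    diff_pct = 100.0 * (subject_value - peer_avg) / peer_avg
    if diff_pct >= threshold_pct:
        return "outperforming peers"
    if diff_pct <= -threshold_pct:
        return "underperforming peers"
    return "in line with peers"
```

Raising or lowering `threshold_pct` is the sketch's analogue of the per-store significance setting described above: a store that considers only 10% swings meaningful will read "in line with peers" where a store with a 2% threshold would read "outperforming."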
The metrics and drivers narrative analytic model 506 may include algorithms to determine which drivers contributed to or inhibited the calculated top-line metric and trajectory. Such subsidiary metrics, or drivers, depend on the top-line metric calculated by the feature-over-time narrative analytic model 502. For example, drivers for revenue and profit may include statistics such as the number of units sold or a dollar amount per unit sold. These drivers can be either positively or negatively correlated with the higher-level metrics, such as profit or revenue. As another example, if the computer system 104 is reviewing the performance of a running back, the metrics and drivers narrative analytic model 506 may explain an improvement in yards gained by the running back by looking at the number of broken tackles or offensive line statistics.
The most direct type of driver is a component or sub-category of the overall metric. Returning to the retail store example, the overall metric may be total number of units sold, while the component metrics may be number of clothing articles sold, total sales in accessories, total sales in cosmetics, etc. Component drivers are measured in the same units as the overall metric to which they contribute, and the sum of their values should be the value of the overall metric. Using component drivers, an explanation of why total clothing sales are up could be determined simply by noticing that jean sales are up.
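Because component drivers are measured in the same units as the overall metric and sum to its value, attribution can be as simple as finding the component with the largest change between periods. A sketch with illustrative numbers:

```python
def top_component_driver(prev, curr):
    """Return (component, change) for the component whose change between
    periods is largest in absolute value."""
    changes = {k: curr[k] - prev[k] for k in curr}
    driver = max(changes, key=lambda k: abs(changes[k]))
    return driver, changes[driver]


# Illustrative component drivers of total units sold.
prev = {"jeans": 100, "accessories": 50, "cosmetics": 30}
curr = {"jeans": 140, "accessories": 52, "cosmetics": 28}

# The overall metric is the sum of its component drivers.
overall_prev, overall_curr = sum(prev.values()), sum(curr.values())
```

Here the 40-unit rise in the overall metric is explained almost entirely by jeans, which is exactly the "jean sales are up" style of explanation described above.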
Another kind of driver may be an input to the top-line metric. For example, inputs to a retail store's total sales might be the total number of individual customer sales, the average dollar amount per customer sale, or the net gain less the wholesale cost. Yet another type of driver is an influence on the overall metric. For example, bad weather may be a negative driver for a golf course's sales, but cold weather may be a positive driver for pro shop sales because golfers who forgot warm clothing may purchase it in order to play through the cold. These two drivers are not measured in the same units as the overall metric, but they have a relationship to the overall metric under evaluation. Because such drivers are not measured in the same units as the overall metric, they may need to be weighted, particularly in relation to other drivers. For example, weather may be weighted more heavily for a golf course's overall revenue metric than for a retail store's overall revenue metric.
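The two driver categories above can be sketched in a few lines. Component drivers share the overall metric's units and sum to it, so attribution reduces to finding the component that moved most; influence drivers are measured in other units and therefore carry a weight. The data shapes and function names here are assumptions for illustration only.

```python
def explain_change(component_deltas):
    """Attribute an overall change to the component driver that moved most.

    component_deltas maps driver name -> change, expressed in the same
    units as the overall metric (the defining property of component drivers).
    """
    return max(component_deltas, key=lambda name: abs(component_deltas[name]))


def rank_influence_drivers(drivers):
    """Rank influence drivers by weighted impact, largest first.

    drivers is a list of (name, raw_impact, weight) tuples; the weight
    converts an impact measured in foreign units (e.g., degrees of
    temperature) into comparable terms, as the passage describes.
    """
    return sorted(drivers, key=lambda d: abs(d[1] * d[2]), reverse=True)
```

So if jean sales rose by 120 units while accessories fell by 30, the jean component would be named as the explanation for a clothing-sales increase.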
Although not shown in
The advise communication goal data structure may receive as a domain input those parameters of the situation 506 that the reader can control. For example, in the retail store context, the advise communication goal data structure should not specify a configuration that results in saying things like “sell more units.” Instead, it may specify evaluating factors such as the success of a marketing program, or whether a coupon campaign resulted in substantially more sales or revenue. Based on these analyses, the resulting narrative may recommend ceasing or continuing such marketing campaigns. The computer system 104 may also analyze employee performance to recommend promotions or terminations.
The second and third instantiated content blocks 514 and 516 receive some or all of the same parameters as the first content block 512, plus additional parameters. For example, the second content block 514 also receives a peer group parameter so that the second content block can compare the performance of the Ford Motor Company to a parameterizable peer group, which in this example is all American automobile manufacturers. Meanwhile, the third content block 516 may receive different parameters than the second content block 514, such as contributing driver parameters and inhibiting driver parameters. In the example shown in
Referring to
A user provides user input into the GUI 600, and the GUI displays selectable options for parameters defined by the communication goal data structures and the narrative analytic models 602. The user input may include first selecting which communication goal data structures should be accessed by the computer system 104 to generate the narrative story. In this way, the user may specify the story specification 202. The user may also use the GUI to define new communication goal data structures and new narrative analytic models.
The GUI 600 further references a structured data set 606 to determine what parameters to display. For example, the GUI 600 may determine which data are available in the structured data set 606 when rendering the menu options listed by the GUI 600. For example, if the structured data set 606 includes data about the Ford Motor Company and General Motors, the GUI 600 will display those two companies in a drop-down menu as entities about which the computer system 104 can write.
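The data-driven menu population just described can be sketched as follows. This is an illustrative assumption about the data's shape (a list of records with an `"entity"` field), not the patent's actual structured data model.

```python
def entity_menu_options(structured_data_set):
    """Collect the distinct entities present in the structured data set.

    Only entities the system has data for are offered in the drop-down,
    so the user cannot request a story the system cannot write.
    """
    return sorted({record["entity"] for record in structured_data_set})
```

Given records about the Ford Motor Company and General Motors, the drop-down would list exactly those two entities, regardless of how many records each contributes.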
Further, after selecting which communication goal data structures comprise the story specification, the user may use the GUI 600 to configure the communication goal data structures and the narrative analytics by providing values or selections for the parameters used by each communication goal data structure and each narrative analytic model. This may include, for example, drop-down menus, sliders, and text input boxes, among others. The communication goal data structures may define what menus, sliders, and other user-selectable options appear on the GUI 600.
Using the user inputs made through the GUI 600, the communication goal data structures and the narrative analytics models 602 receive the parameters selected through the user input, and using those parameters, the communication goal data structures and the narrative analytic models instantiate content blocks 604.
After instantiating the content blocks 604, a parsing engine 114 parses the story specification to generate the actual programmatic data structures that are directly executable by the processor. These programmatic data structures serve as part of the narrative generation engine 116. The narrative generation engine 116 is configured to process input data about which narrative is to be generated, and, based on this processing, automatically generates a narrative. This process is described in more detail above.
In
Referring to
Referring to
Referring to
The computer system 104 also supports controlling other aspects of the language, in this case the “Tone” of the generated text by changing the tone drop-down menu 814. In
In
In
Referring to
In
In
In
The exemplary GUIs 800 shown in
The exemplary embodiments are applicable to a wide range of content verticals, and the specifications delineate the nature of the data that are necessary to parameterize the narrative analytics models 500, and so to drive narrative generation. In other words, the techniques taught by the exemplary embodiments enable the development of broadly applicable narrative products that can easily be applied to new content verticals simply by specifying the appropriate data in a defined data model or format. A performance report for a retail store has frequently been discussed above and
Referring to
Referring to
Referring to
Referring to
Referring to
In sum, the same story specification, composed of the same communication goals data structures 295 and narrative analytics models 500, appropriately parameterized and supplied with relevant data, can immediately produce useful and comprehensible narratives in radically different domains.
For example, the performance report specification configured in the previous section (and used as an example throughout this discussion) can easily be applied to a new domain simply by specifying the entity whose performance is to be discussed, the top-line metrics that matter, the appropriate benchmarks for assessing these metrics, and the relevant drivers for these metrics.
By virtue of this “top-down” approach to automatically generating narrative stories, whereby narrative goals dictate what data must be communicated and how to communicate the data, the exemplary embodiments support an interactive model for conveying information. In the exemplary embodiments described above, the computer anticipates the questions a reader will want answered, and aims to answer all those questions by defining communication goal data structures created to fulfill the narrative goals that answer the anticipated reader questions.
Interactive Narrative Generation Based on Communication Goals
As mentioned above, the steps 110 and 112 of
Although operating in the user-interactive mode, the computer system 104 still operates according to the exemplary configuration and process flow illustrated in
At step 2512, the processor determines which communication goal data structure will answer the question posed by the user. In the example of “how did the store do this week?”, the processor may determine that the describe status communication goal data structure 495A will answer this question. The processor may determine that the question relates to a specific communication goal data structure by searching for key words in the query. If the computer system provides user-selectable options, each user-selectable option may have metadata associating it with a specific communication goal data structure. In either mode, the processor determines which communication goal data structure will answer the question selected or entered by the user.
At step 2514, the processor may determine if the query entered by the user includes all the necessary parameters as required by the communication goal data structure and the narrative analytics. For example, if the processor determined in step 2512 that the describe status communication goal data structure will answer the user-entered query, the processor may determine if the query entered or selected by the user included a specified entity, a time period, and a specified top-line metric. If the user entered the query “how is the Ford Motor company doing?”, the processor may determine that the user has only entered a parameter for the entity. If the processor determines that one or more parameters are missing, the processor may prompt the user for the missing parameters (step 2516). An example of such a prompt is illustrated in
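Steps 2514-2516 can be sketched as a simple completeness check: the parameters recovered from the user's query are compared against what the selected communication goal data structure requires, and the user is prompted for whatever is missing. The parameter names and the dictionary-based representation are assumptions for this sketch.

```python
# Required parameters per communication goal (illustrative; the describe
# status goal per the passage needs an entity, a time period, and a metric).
REQUIRED = {
    "describe_status": ["entity", "time_period", "metric"],
}


def missing_parameters(goal, found):
    """Return the required parameters absent from the parsed query."""
    return [p for p in REQUIRED[goal] if p not in found]


def prompt_for(goal, found):
    """Build a prompt for missing parameters, or None if the query is complete."""
    missing = missing_parameters(goal, found)
    if missing:
        return "Please specify: " + ", ".join(missing)
    return None
```

For the query “how is the Ford Motor Company doing?”, only the entity parameter is found, so the sketch would prompt for the time period and the top-line metric.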
Referring again to
For example, the computer system 104 may generate a narrative aimed at describing, evaluating, and explaining the performance of a retail store. Consider the following plausible, if hypothetical, dialog:
In the dialog above, all the communication goals and relevant narrative analytics expressed in the example retail store sales narrative are fulfilled—but they are fulfilled at the explicit request of the user, expressed in questions that invoke a relevant narrative goal. In the above dialog, the computer system 104 waits for the reader to explicitly express each question before answering it. Below is an interactive dialog in a different domain: a person's weight. The computer system 104 may be configured to understand that weight as a metric changes over time, and that these changes are impacted by food intake, exercise, and sleep. Food intake in turn is influenced by what a person eats, and how much he or she eats. What a person eats, in turn, is influenced by where he or she eats. Finally, people have targets for their weight. Suppose the computer system 104 reports that a user weighs 190 pounds. Once this has been conveyed, the possible follow-up questions can be anticipated. The user might want to know if this is good in absolute terms, or whether it is going up or down. He or she might want to know if this is good in comparison to his or her goals or to an external benchmark. Or, the user might want to know how he or she is doing against a cohort—for example, other people who are dieting—either in terms of absolutes (the weight itself) or deltas (the direction and amount of change). With access to appropriate data from a variety of online sources, the resulting dialog might look something like this:
In a dynamic and interactive dialog, these communication goals are fulfilled incrementally: it is only when the user requests certain information through an explicit question that the relevant narrative analytics to address that question are invoked and the results conveyed. In one embodiment, an interactive dialog, as indicated above, is generated ahead of time in the form of snippets that fulfill all the narrative goals that can be expected given the nature and purpose of the communication. However, rather than assembling these snippets into a single, structured narrative for presentation to the user, they can be held in abeyance until the specific communication goal each fulfills is explicitly indicated by a user's question.
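The snippets-held-in-abeyance embodiment can be sketched as a lookup keyed by communication goal: everything is generated ahead of time, but each snippet is released only when the user's question invokes its goal. The goal names and snippet texts below are placeholders, not content from the patent.

```python
# Pre-generated snippets, one per expected communication goal (illustrative).
SNIPPETS = {
    "describe_status": "The store sold 1,200 units this week.",
    "evaluate_status": "That is 8% better than its peer group.",
    "explain_status": "The gain was driven primarily by jean sales.",
}


def answer(invoked_goal, snippets):
    """Release only the snippet whose goal the user's question invoked.

    All snippets already exist; fulfillment is incremental because each is
    withheld until its goal is explicitly requested.
    """
    return snippets.get(invoked_goal)
```

A question mapped to the evaluate goal thus surfaces only the peer-comparison snippet, leaving the describe and explain snippets in abeyance for later questions.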
In another embodiment, the computer system 104 does not consider communication goal data structures 295 until the user indicates that he or she wishes to have a question answered through the communication goal data structures 295. The user's expressions of interest are compared with the set of expected goals, as described earlier, and the appropriate goal is selected. The set of potentially relevant and expected communication goals is then updated based on the goal that has been fulfilled. The power of such a dynamic and recursive model is that the total set of questions of interest to the user, and corresponding communication goals, narrative analytics, and relevant data, need not be fixed in advance, but may grow in response to the user's interests as they arise and are conveyed interactively. In such an approach, once a communication goal has been fulfilled, the related communication goals, and the narrative analytics models they entail, are made available for possible invocation.
Subsequently, the computer system interprets the input to decide which communication goal data structure answers the user's question or addresses the user's input (step 2512). That is, the computer system 104 selects which communication goal data structure 295 fulfills the narrative goal expressed by the user.
Step 2504 may be accomplished in different ways depending on how the input was received. For example, if the interactive interface has a finite number of pre-selectable options available, such options may be mapped to a specific communication goal data structure. For example, a first screen may illustrate the describe, evaluate, and explain subject status communication goal data structures, represented in natural language. For example, the GUI on a first screen may show: Button 1: Do you want to know about the store's status? Button 2: Do you want to know why the store is doing so well or poorly? Button 3: Do you want to know how the store compares? Selecting one button may generate a submenu where new options are available. For example, if the user clicked Button 1, which tells the user how the store is doing, the submenu may allow the user to select various top-line metrics or various time frames.
Alternatively, the computer system 104 may receive natural language inputs from the user, as in the interactive examples shown above. In order to understand and interpret the words input by the user, the computer system 104 may look for keywords that correspond to the communication goal data structures. For example, if the user writes or speaks “What were the profits for this week?”, the computer system 104 may recognize the word profit as a known top-line metric and the word week as a known time frame. Finding these two inputs, the computer system 104 may determine that the user is interested in the describe subject status communication goal data structure 495A. As another example, if the user enters the word “why,” the computer may realize it needs to explain something, where the something depends on a domain recognized by the computer. These inputs may be context sensitive. For example, after providing a narrative fulfilling the narrative goal of describing the store's status, if the user simply enters the word “why,” the computer system 104 may assume this question is in the same domain as the information it just presented. Thus, the computer system 104 may explain why the status is as such.
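The keyword interpretation just described, including the context-sensitive handling of a bare “why,” can be sketched as follows. The keyword lists and goal names are assumptions for this sketch; a real system would draw them from the configured communication goal data structures.

```python
# Illustrative keyword vocabularies (assumed, not from the patent).
METRICS = {"profit", "profits", "revenue", "sales"}
TIMEFRAMES = {"week", "month", "year"}


def interpret(query, last_domain=None):
    """Map a natural-language query to a communication goal.

    Returns (goal, domain). A bare "why" falls back to explaining whatever
    domain was most recently presented, per the context-sensitivity the
    passage describes.
    """
    words = {w.strip("?,.").lower() for w in query.split()}
    if "why" in words:
        return ("explain_status", last_domain)
    if words & METRICS and words & TIMEFRAMES:
        return ("describe_status", None)
    return (None, None)
```

Thus “What were the profits for this week?” maps to the describe goal via the metric and time-frame keywords, while “why” asked immediately afterward maps to an explanation of the status just presented.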
After the computer system 104 determines the correct communication goal, the computer system performs steps 2514-2522 described above with reference to
Optionally, the computer system 104 may tailor the text presented to the user based on the input. For example, if the user asked a yes or no question, the computer system 104 may first add the word “yes” or “no” before presenting the text generated by the narrative analytics model 500.
In view of the foregoing, it will be seen that the several advantages of the invention are achieved and attained.
The embodiments were chosen and described in order to best explain the principles of the invention and its practical application to thereby enable others skilled in the art to best utilize the invention in various embodiments and with various modifications as are suited to the particular use contemplated.
As various modifications could be made in the constructions and methods herein described and illustrated without departing from the scope of the invention, it is intended that all matter contained in the foregoing description or shown in the accompanying drawings shall be interpreted as illustrative rather than limiting. Thus, the breadth and scope of the present invention should not be limited by any of the above-described exemplary embodiments, but should be defined only in accordance with the following claims appended hereto and their equivalents.
This patent application is a continuation of U.S. application Ser. No. 17/000,516, filed on Aug. 24, 2020, now U.S. Pat. No. 11,501,220, which is a continuation of U.S. application Ser. No. 15/977,141, filed on May 11, 2018, now U.S. Pat. No. 10,755,042, which is a continuation of U.S. application Ser. No. 14/570,834, filed on Dec. 15, 2014, now U.S. Pat. No. 9,977,773, which is a continuation of U.S. application Ser. No. 14/521,264, filed on Oct. 22, 2014, now U.S. Pat. No. 9,720,899, the entire disclosures of each of which are incorporated herein by reference. This patent application is related to U.S. Pat. Nos. 8,355,903, 8,374,848, 8,630,844, 8,688,434, and 8,775,161, the entire disclosures of each of which are incorporated herein by reference. This patent application is also related to the following U.S. patent applications: (1) U.S. application Ser. No. 12/986,972, filed Jan. 7, 2011, (2) U.S. application Ser. No. 12/986,981, filed Jan. 7, 2011, (3) U.S. application Ser. No. 12/986,996, filed Jan. 7, 2011, (4) U.S. application Ser. No. 13/186,329, filed Jul. 19, 2011, (5) U.S. application Ser. No. 13/186,337, filed Jul. 19, 2011, (6) U.S. application Ser. No. 13/186,346, filed Jul. 19, 2011, (7) U.S. application Ser. No. 13/464,635, filed May 4, 2012, (8) U.S. application Ser. No. 13/464,675, filed May 4, 2012, (9) U.S. application Ser. No. 13/738,560, filed Jan. 10, 2013, (10) U.S. application Ser. No. 13/738,609, filed Jan. 10, 2013, (11) U.S. App. Ser. No. 61/799,328, filed Mar. 15, 2013, (12) U.S. application Ser. No. 14/090,021, filed Nov. 26, 2013, and (13) U.S. application Ser. No. 14/211,444, filed Mar. 14, 2014, the entire disclosures of each of which are incorporated herein by reference.
Number | Name | Date | Kind |
---|---|---|---|
4992939 | Tyler | Feb 1991 | A |
5619631 | Schott | Apr 1997 | A |
5734916 | Greenfield et al. | Mar 1998 | A |
5802495 | Goltra | Sep 1998 | A |
6006175 | Holzrichter | Dec 1999 | A |
6144938 | Surace et al. | Nov 2000 | A |
6278967 | Akers et al. | Aug 2001 | B1 |
6289363 | Consolatti et al. | Sep 2001 | B1 |
6622152 | Sinn et al. | Sep 2003 | B1 |
6651218 | Adler et al. | Nov 2003 | B1 |
6757362 | Cooper et al. | Jun 2004 | B1 |
6771290 | Hoyle | Aug 2004 | B1 |
6810111 | Hunter | Oct 2004 | B1 |
6917936 | Cancedda | Jul 2005 | B2 |
6968316 | Hamilton | Nov 2005 | B1 |
6976031 | Toupal et al. | Dec 2005 | B1 |
7027974 | Busch et al. | Apr 2006 | B1 |
7246315 | Andrieu et al. | Jul 2007 | B1 |
7324936 | Saldanha et al. | Jan 2008 | B2 |
7333967 | Bringsjord et al. | Feb 2008 | B1 |
7496621 | Pan et al. | Feb 2009 | B2 |
7577634 | Ryan et al. | Aug 2009 | B2 |
7610279 | Budzik et al. | Oct 2009 | B2 |
7617199 | Budzik et al. | Nov 2009 | B2 |
7617200 | Budzik et al. | Nov 2009 | B2 |
7627565 | Budzik et al. | Dec 2009 | B2 |
7644072 | Budzik et al. | Jan 2010 | B2 |
7657518 | Budzik et al. | Feb 2010 | B2 |
7716116 | Schiller | May 2010 | B2 |
7778895 | Baxter et al. | Aug 2010 | B1 |
7818676 | Baker | Oct 2010 | B2 |
7836010 | Hammond et al. | Nov 2010 | B2 |
7840448 | Musgrove et al. | Nov 2010 | B2 |
7856390 | Schiller | Dec 2010 | B2 |
7865496 | Schiller | Jan 2011 | B1 |
7930169 | Billerey-Mosier | Apr 2011 | B2 |
8046226 | Soble et al. | Oct 2011 | B2 |
8311863 | Kemp | Nov 2012 | B1 |
8355903 | Birnbaum et al. | Jan 2013 | B1 |
8374848 | Birnbaum et al. | Feb 2013 | B1 |
8447604 | Chang | May 2013 | B1 |
8463695 | Schiller | Jun 2013 | B2 |
8494944 | Schiller | Jul 2013 | B2 |
8515737 | Allen | Aug 2013 | B2 |
8630844 | Nichols et al. | Jan 2014 | B1 |
8630912 | Seki et al. | Jan 2014 | B2 |
8630919 | Baran et al. | Jan 2014 | B2 |
8645124 | Karov Zangvil | Feb 2014 | B2 |
8676691 | Schiller | Mar 2014 | B2 |
8688434 | Birnbaum et al. | Apr 2014 | B1 |
8751563 | Warden et al. | Jun 2014 | B1 |
8762133 | Reiter | Jun 2014 | B2 |
8762134 | Reiter | Jun 2014 | B2 |
8775161 | Nichols et al. | Jul 2014 | B1 |
8812311 | Weber | Aug 2014 | B2 |
8819001 | Zhang | Aug 2014 | B1 |
8843363 | Birnbaum et al. | Sep 2014 | B2 |
8886520 | Nichols et al. | Nov 2014 | B1 |
8892417 | Nichols et al. | Nov 2014 | B1 |
8892419 | Lundberg et al. | Nov 2014 | B2 |
8903711 | Lundberg et al. | Dec 2014 | B2 |
9135244 | Reiter | Sep 2015 | B2 |
9208147 | Nichols et al. | Dec 2015 | B1 |
9244894 | Dale et al. | Jan 2016 | B1 |
9251134 | Birnbaum et al. | Feb 2016 | B2 |
9323743 | Reiter | Apr 2016 | B2 |
9336193 | Logan et al. | May 2016 | B2 |
9355093 | Reiter | May 2016 | B2 |
9396168 | Birnbaum et al. | Jul 2016 | B2 |
9396181 | Sripada et al. | Jul 2016 | B1 |
9396758 | Oz et al. | Jul 2016 | B2 |
9405448 | Reiter | Aug 2016 | B2 |
9424254 | Howald et al. | Aug 2016 | B2 |
9430557 | Bhat et al. | Aug 2016 | B2 |
9460075 | Mungi et al. | Oct 2016 | B2 |
9483520 | Reiner et al. | Nov 2016 | B1 |
9529795 | Kondadadi et al. | Dec 2016 | B2 |
9536049 | Brown et al. | Jan 2017 | B2 |
9576009 | Hammond et al. | Feb 2017 | B1 |
9594756 | Sabharwal | Mar 2017 | B2 |
9665259 | Lee et al. | May 2017 | B2 |
9697197 | Birnbaum et al. | Jul 2017 | B1 |
9697492 | Birnbaum et al. | Jul 2017 | B1 |
9720899 | Birnbaum et al. | Aug 2017 | B1 |
9767145 | Prophete et al. | Sep 2017 | B2 |
9870629 | Cardno et al. | Jan 2018 | B2 |
9875494 | Kalns et al. | Jan 2018 | B2 |
9946711 | Reiter et al. | Apr 2018 | B2 |
9971967 | Bufe, III et al. | May 2018 | B2 |
9977773 | Birnbaum et al. | May 2018 | B1 |
9990337 | Birnbaum et al. | Jun 2018 | B2 |
10019512 | Boyle et al. | Jul 2018 | B2 |
10037377 | Boyle et al. | Jul 2018 | B2 |
10049152 | Ajmera et al. | Aug 2018 | B2 |
10095692 | Song et al. | Oct 2018 | B2 |
10101889 | Prophete et al. | Oct 2018 | B2 |
10185477 | Paley et al. | Jan 2019 | B1 |
10482381 | Nichols et al. | Nov 2019 | B2 |
10572606 | Paley et al. | Feb 2020 | B1 |
10579835 | Phillips et al. | Mar 2020 | B1 |
10585983 | Paley et al. | Mar 2020 | B1 |
10657201 | Nichols et al. | May 2020 | B1 |
10698585 | Kraljic et al. | Jun 2020 | B2 |
10699079 | Paley et al. | Jun 2020 | B1 |
10706236 | Platt et al. | Jul 2020 | B1 |
10747823 | Birnbaum et al. | Aug 2020 | B1 |
10755042 | Birnbaum et al. | Aug 2020 | B2 |
10755046 | Lewis Meza et al. | Aug 2020 | B1 |
10762304 | Paley et al. | Sep 2020 | B1 |
10853583 | Platt et al. | Dec 2020 | B1 |
10943069 | Paley et al. | Mar 2021 | B1 |
10956656 | Birnbaum et al. | Mar 2021 | B2 |
10963649 | Sippel et al. | Mar 2021 | B1 |
10990767 | Smathers et al. | Apr 2021 | B1 |
11003866 | Sippel et al. | May 2021 | B1 |
11023689 | Sippel et al. | Jun 2021 | B1 |
11030408 | Meza et al. | Jun 2021 | B1 |
11042708 | Pham et al. | Jun 2021 | B1 |
11042709 | Pham et al. | Jun 2021 | B1 |
11042713 | Platt et al. | Jun 2021 | B1 |
11068661 | Nichols et al. | Jul 2021 | B1 |
11126798 | Lewis Meza et al. | Sep 2021 | B1 |
11144838 | Platt et al. | Oct 2021 | B1 |
11170038 | Platt et al. | Nov 2021 | B1 |
11182556 | Lewis Meza et al. | Nov 2021 | B1 |
11188588 | Platt et al. | Nov 2021 | B1 |
11222184 | Platt et al. | Jan 2022 | B1 |
11232268 | Platt et al. | Jan 2022 | B1 |
11232270 | Platt et al. | Jan 2022 | B1 |
11238090 | Platt et al. | Feb 2022 | B1 |
11288328 | Birnbaum et al. | Mar 2022 | B2 |
11334726 | Platt et al. | May 2022 | B1 |
11341330 | Smathers et al. | May 2022 | B1 |
11341338 | Platt et al. | May 2022 | B1 |
11475076 | Birnbaum et al. | Oct 2022 | B2 |
11501220 | Birnbaum et al. | Nov 2022 | B2 |
11521079 | Nichols et al. | Dec 2022 | B2 |
11561684 | Paley et al. | Jan 2023 | B1 |
11561986 | Sippel et al. | Jan 2023 | B1 |
11562146 | Paley et al. | Jan 2023 | B2 |
20020046018 | Marcu et al. | Apr 2002 | A1 |
20020083025 | Robarts et al. | Jun 2002 | A1 |
20020107721 | Darwent et al. | Aug 2002 | A1 |
20030004706 | Yale et al. | Jan 2003 | A1 |
20030061029 | Shaket | Mar 2003 | A1 |
20030110186 | Markowski et al. | Jun 2003 | A1 |
20030182102 | Corston-Oliver et al. | Sep 2003 | A1 |
20030216905 | Chelba et al. | Nov 2003 | A1 |
20040015342 | Garst | Jan 2004 | A1 |
20040029977 | Kawa et al. | Feb 2004 | A1 |
20040034520 | Langkilde-Geary et al. | Feb 2004 | A1 |
20040083092 | Valles | Apr 2004 | A1 |
20040138899 | Birnbaum et al. | Jul 2004 | A1 |
20040174397 | Cereghini et al. | Sep 2004 | A1 |
20040225651 | Musgrove et al. | Nov 2004 | A1 |
20040255232 | Hammond et al. | Dec 2004 | A1 |
20050027704 | Hammond et al. | Feb 2005 | A1 |
20050028156 | Hammond et al. | Feb 2005 | A1 |
20050033582 | Gadd et al. | Feb 2005 | A1 |
20050049852 | Chao | Mar 2005 | A1 |
20050125213 | Chen et al. | Jun 2005 | A1 |
20050137854 | Cancedda et al. | Jun 2005 | A1 |
20050273362 | Harris et al. | Dec 2005 | A1 |
20060031182 | Ryan et al. | Feb 2006 | A1 |
20060101335 | Pisciottano | May 2006 | A1 |
20060165040 | Rathod | Jul 2006 | A1 |
20060181531 | Goldschmidt | Aug 2006 | A1 |
20060212446 | Hammond et al. | Sep 2006 | A1 |
20060271535 | Hammond et al. | Nov 2006 | A1 |
20060277168 | Hammond et al. | Dec 2006 | A1 |
20070132767 | Wright et al. | Jun 2007 | A1 |
20070185846 | Budzik et al. | Aug 2007 | A1 |
20070185847 | Budzik et al. | Aug 2007 | A1 |
20070185861 | Budzik et al. | Aug 2007 | A1 |
20070185862 | Budzik et al. | Aug 2007 | A1 |
20070185863 | Budzik et al. | Aug 2007 | A1 |
20070185864 | Budzik et al. | Aug 2007 | A1 |
20070185865 | Budzik et al. | Aug 2007 | A1 |
20070250479 | Lunt et al. | Oct 2007 | A1 |
20070250826 | O'Brien | Oct 2007 | A1 |
20080005677 | Thompson | Jan 2008 | A1 |
20080198156 | Jou et al. | Aug 2008 | A1 |
20080243285 | Reichhart | Oct 2008 | A1 |
20080250070 | Abdulla et al. | Oct 2008 | A1 |
20080256066 | Zuckerman et al. | Oct 2008 | A1 |
20080304808 | Newell et al. | Dec 2008 | A1 |
20080306882 | Schiller | Dec 2008 | A1 |
20080313130 | Hammond et al. | Dec 2008 | A1 |
20090019013 | Tareen et al. | Jan 2009 | A1 |
20090030899 | Tareen et al. | Jan 2009 | A1 |
20090049041 | Tareen et al. | Feb 2009 | A1 |
20090083288 | LeDain et al. | Mar 2009 | A1 |
20090089100 | Nenov et al. | Apr 2009 | A1 |
20090119095 | Beggelman et al. | May 2009 | A1 |
20090119584 | Herbst | May 2009 | A1 |
20090144608 | Oisel et al. | Jun 2009 | A1 |
20090157664 | Wen | Jun 2009 | A1 |
20090175545 | Cancedda et al. | Jul 2009 | A1 |
20090248399 | Au | Oct 2009 | A1 |
20100146393 | Land et al. | Jun 2010 | A1 |
20100161541 | Covannon et al. | Jun 2010 | A1 |
20100185984 | Wright et al. | Jul 2010 | A1 |
20100241620 | Manister et al. | Sep 2010 | A1 |
20100325107 | Kenton et al. | Dec 2010 | A1 |
20110022941 | Osborne et al. | Jan 2011 | A1 |
20110044447 | Morris et al. | Feb 2011 | A1 |
20110077958 | Breitenstein et al. | Mar 2011 | A1 |
20110078105 | Wallace | Mar 2011 | A1 |
20110087486 | Schiller | Apr 2011 | A1 |
20110099184 | Symington | Apr 2011 | A1 |
20110113315 | Datha et al. | May 2011 | A1 |
20110113334 | Joy et al. | May 2011 | A1 |
20110182283 | Van Buren et al. | Jul 2011 | A1 |
20110191417 | Rathod | Aug 2011 | A1 |
20110246182 | Allen | Oct 2011 | A1 |
20110249953 | Suri et al. | Oct 2011 | A1 |
20110288852 | Dymetman et al. | Nov 2011 | A1 |
20110295903 | Chen | Dec 2011 | A1 |
20110311144 | Tardif | Dec 2011 | A1 |
20110314381 | Fuller et al. | Dec 2011 | A1 |
20120011428 | Chisholm | Jan 2012 | A1 |
20120041903 | Beilby et al. | Feb 2012 | A1 |
20120069131 | Abelow | Mar 2012 | A1 |
20120109637 | Merugu et al. | May 2012 | A1 |
20120158850 | Harrison et al. | Jun 2012 | A1 |
20120203623 | Sethi | Aug 2012 | A1 |
20120265531 | Bennett | Oct 2012 | A1 |
20120310699 | McKenna et al. | Dec 2012 | A1 |
20130041677 | Nusimow et al. | Feb 2013 | A1 |
20130091031 | Baran et al. | Apr 2013 | A1 |
20130096947 | Shah et al. | Apr 2013 | A1 |
20130145242 | Birnbaum et al. | Jun 2013 | A1 |
20130174026 | Locke | Jul 2013 | A1 |
20130185051 | Buryak et al. | Jul 2013 | A1 |
20130187926 | Silverstein et al. | Jul 2013 | A1 |
20130211855 | Eberle et al. | Aug 2013 | A1 |
20130246300 | Fischer et al. | Sep 2013 | A1 |
20130246934 | Wade et al. | Sep 2013 | A1 |
20130262092 | Wasick | Oct 2013 | A1 |
20130275121 | Tunstall-Pedoe | Oct 2013 | A1 |
20130304507 | Dail et al. | Nov 2013 | A1 |
20140040312 | Gorman et al. | Feb 2014 | A1 |
20140046891 | Banas | Feb 2014 | A1 |
20140075004 | Van Dusen et al. | Mar 2014 | A1 |
20140100844 | Stieglitz et al. | Apr 2014 | A1 |
20140129942 | Rathod | May 2014 | A1 |
20140134590 | Hiscock Jr. | May 2014 | A1 |
20140149107 | Schilder | May 2014 | A1 |
20140163962 | Castelli et al. | Jun 2014 | A1 |
20140200878 | Mylonakis et al. | Jul 2014 | A1 |
20140356833 | Sabczynski et al. | Dec 2014 | A1 |
20140372850 | Campbell et al. | Dec 2014 | A1 |
20140375466 | Reiter | Dec 2014 | A1 |
20150032730 | Cialdea, Jr. et al. | Jan 2015 | A1 |
20150049951 | Chaturvedi et al. | Feb 2015 | A1 |
20150078232 | Djinki et al. | Mar 2015 | A1 |
20150134694 | Burke et al. | May 2015 | A1 |
20150142704 | London | May 2015 | A1 |
20150161997 | Wetsel et al. | Jun 2015 | A1 |
20150169548 | Reiter | Jun 2015 | A1 |
20150178386 | Oberkampf et al. | Jun 2015 | A1 |
20150186504 | Gorman et al. | Jul 2015 | A1 |
20150199339 | Mirkin et al. | Jul 2015 | A1 |
20150227508 | Howald et al. | Aug 2015 | A1 |
20150227588 | Shapira et al. | Aug 2015 | A1 |
20150242384 | Reiter | Aug 2015 | A1 |
20150249584 | Cherifi et al. | Sep 2015 | A1 |
20150261745 | Song et al. | Sep 2015 | A1 |
20150286747 | Anastasakos et al. | Oct 2015 | A1 |
20150324347 | Bradshaw et al. | Nov 2015 | A1 |
20150324351 | Sripada et al. | Nov 2015 | A1 |
20150324374 | Sripada et al. | Nov 2015 | A1 |
20150325000 | Sripada | Nov 2015 | A1 |
20150331850 | Ramish | Nov 2015 | A1 |
20150332665 | Mishra et al. | Nov 2015 | A1 |
20150347400 | Sripada | Dec 2015 | A1 |
20150347901 | Cama et al. | Dec 2015 | A1 |
20150363364 | Sripada | Dec 2015 | A1 |
20150370778 | Tremblay et al. | Dec 2015 | A1 |
20160019200 | Allen | Jan 2016 | A1 |
20160026253 | Bradski et al. | Jan 2016 | A1 |
20160027125 | Bryce | Jan 2016 | A1 |
20160062604 | Kraljic et al. | Mar 2016 | A1 |
20160132489 | Reiter | May 2016 | A1 |
20160140090 | Dale et al. | May 2016 | A1 |
20160217133 | Reiter et al. | Jul 2016 | A1 |
20160232152 | Mahamood | Aug 2016 | A1 |
20170060857 | Imbruce et al. | Mar 2017 | A1 |
20170061093 | Amarasingham et al. | Mar 2017 | A1 |
20170125015 | Dielmann et al. | May 2017 | A1 |
20170185674 | Tonkin et al. | Jun 2017 | A1 |
20170199928 | Zhao et al. | Jul 2017 | A1 |
20180285324 | Birnbaum et al. | Oct 2018 | A1 |
20210192132 | Birnbaum et al. | Jun 2021 | A1 |
20210192144 | Paley et al. | Jun 2021 | A1 |
20220114206 | Platt et al. | Apr 2022 | A1 |
20230027421 | Birnbaum et al. | Jan 2023 | A1 |
Number | Date | Country |
---|---|---|
9630844 | Oct 1996 | WO |
2006122329 | Nov 2006 | WO |
2014035400 | Mar 2014 | WO |
2014035402 | Mar 2014 | WO |
2014035403 | Mar 2014 | WO |
2014035406 | Mar 2014 | WO |
2014035407 | Mar 2014 | WO |
2014035447 | Mar 2014 | WO |
2014070197 | May 2014 | WO |
2014076524 | May 2014 | WO |
2014076525 | May 2014 | WO |
2014102568 | Jul 2014 | WO |
2014102569 | Jul 2014 | WO |
2014111753 | Jul 2014 | WO |
2015028844 | Mar 2015 | WO |
2015159133 | Oct 2015 | WO |
Entry |
---|
Prosecution history for U.S. Appl. No. 17/000,516, filed Aug. 24, 2020. |
Weston et al., “A Framework for Constructing Semantically Composable Feature Models from Natural Language Requirements”, SPLC '09: Proceedings of the 13th International Software Product Line Conference, Aug. 2009, p. 211-220. |
Prosecution History for U.S. Appl. No. 12/779,668, now U.S. Pat. No. 8,374,848, filed May 13, 2010. |
Prosecution History for U.S. Appl. No. 12/779,683, now U.S. Pat. No. 8,355,903, filed May 13, 2010. |
Prosecution History for U.S. Appl. No. 13/186,308, now U.S. Pat. No. 8,775,161, filed Jul. 19, 2011. |
Prosecution History for U.S. Appl. No. 13/186,329, now U.S. Pat. No. 8,892,417, filed Jul. 19, 2011. |
Prosecution History for U.S. Appl. No. 13/186,337, now U.S. Pat. No. 8,886,520, filed Jul. 19, 2011. |
Prosecution History for U.S. Appl. No. 13/186,346, filed Jul. 19, 2011. |
Prosecution History for U.S. Appl. No. 13/464,635, filed May 4, 2012. |
Prosecution History for U.S. Appl. No. 13/464,675, now U.S. Pat. No. 10,657,201, filed May 4, 2012. |
Prosecution History for U.S. Appl. No. 13/464,716, now U.S. Pat. No. 8,630,844, filed May 4, 2012. |
Prosecution History for U.S. Appl. No. 13/738,560, now U.S. Pat. No. 8,843,363, filed Jan. 10, 2013. |
Prosecution History for U.S. Appl. No. 13/738,609, now U.S. Pat. No. 9,251,134, filed Jan. 10, 2013. |
Prosecution History for U.S. Appl. No. 14/090,021, now U.S. Pat. No. 9,208,147, filed Nov. 26, 2013. |
Prosecution History for U.S. Appl. No. 14/626,966, filed Feb. 20, 2015. |
Prosecution History for U.S. Appl. No. 15/011,743, filed Feb. 1, 2016. |
Reiter et al., “Building Applied Natural Language Generation Systems”, Cambridge University Press, 1995, pp. 1-32. |
Reiter, E. (2007). An architecture for Data-To-Text systems. In: Busemann, Stephan (Ed.), Proceedings of the 11th European Workshop on Natural Language Generation, pp. 97-104. |
Reiter, E., Gatt, A., Portet, F., and van der Meulen, M. (2008). The importance of narrative and other lessons from an evaluation of an NLG system that summarises clinical data. Proceedings of the 5th International Conference on Natural Language Generation. |
Reiter, E., Sripada, S., Hunter, J., Yu, J., and Davy, I. (2005). Choosing words in computer-generated weather forecasts. Artificial Intelligence, 167:137-169. |
Response to Office Action for U.S. Appl. No. 13/464,635 dated Jun. 4, 2015. |
Riedl et al., “From Linear Story Generation to Branching Story Graphs”, IEEE Computer Graphics and Applications, 2006, pp. 23-31. |
Riedl et al., “Narrative Planning: Balancing Plot and Character”, Journal of Artificial Intelligence Research, 2010, pp. 217-268, vol. 39. |
Roberts et al., “Lessons on Using Computationally Generated Influence for Shaping Narrative Experiences”, IEEE Transactions on Computational Intelligence and AI in Games, Jun. 2014, pp. 188-202, vol. 6, No. 2, doi: 10.1109/TCIAIG.2013.2287154. |
Robin, J. (1996). Evaluating the portability of revision rules for incremental summary generation. Paper presented at Proceedings of the 34th Annual Meeting of the Association for Computational Linguistics (ACL'96), Santa Cruz, CA. |
Rui, Y., Gupta, A., and Acero, A. 2000. Automatically extracting highlights for TV Baseball programs. In Proceedings of the eighth ACM international conference on Multimedia. (Marina del Rey, California, United States). ACM Press, New York, NY 105-115. |
Segel et al., “Narrative Visualization: Telling Stories with Data”, Stanford University, Oct. 2010, 10 pgs. |
Sripada, S., Reiter, E., and Davy, I. (2003). SumTime-Mousam: Configurable Marine Weather Forecast Generator. Expert Update 6(3):4-10. |
Storyview, Screenplay Systems, 2000, user manual. |
Theune, M., Klabbers, E., Odijk, J., dePijper, J., and Krahmer, E. (2001) “From Data to Speech: A General Approach”, Natural Language Engineering 7(1): 47-86. |
Thomas, K., and Sripada, S. (2007). Atlas.txt: Linking Geo-referenced Data to Text for NLG. Paper presented at Proceedings of the 2007 European Natural Language Generation Workshop (ENLG07). |
Thomas, K., and Sripada, S. (2008). What's in a message? Interpreting Geo-referenced Data for the Visually-impaired. Proceedings of the Int. conference on NLG. |
Thomas, K., Sumegi, L., Ferres, L., and Sripada, S. (2008). Enabling Access to Geo-referenced Information: Atlas.txt. Proceedings of the Cross-disciplinary Conference on Web Accessibility. |
Van der Meulen, M., Logie, R., Freer, Y., Sykes, C., McIntosh, N., and Hunter, J. (2008). When a Graph is Poorer than 100 Words: A Comparison of Computerised Natural Language Generation, Human Generated Descriptions and Graphical Displays in Neonatal Intensive Care. Applied Cognitive Psychology. |
Yu, J., Reiter, E., Hunter, J., and Mellish, C. (2007). Choosing the content of textual summaries of large time-series data sets. Natural Language Engineering, 13:25-49. |
Yu, J., Reiter, E., Hunter, J., and Sripada, S. (2003). Sumtime-Turbine: A Knowledge-Based System to Communicate Time Series Data in the Gas Turbine Domain. In P Chung et al. (Eds) Developments in Applied Artificial Intelligence: Proceedings of IEA/AIE-2003, pp. 379-384. Springer (LNAI 2718). |
Allen et al., “StatsMonkey: A Data-Driven Sports Narrative Writer”, Computational Models of Narrative: Papers from the AAAI Fall Symposium, Nov. 2010, 2 pages. |
Andersen, P., Hayes, P., Huettner, A., Schmandt, L., Nirenburg, I., and Weinstein, S. (1992). Automatic extraction of facts from press releases to generate news stories. In Proceedings of the third conference on Applied natural language processing. (Trento, Italy). ACM Press, New York, NY, 170-177. |
Andre, E., Herzog, G., & Rist, T. (1988). On the simultaneous interpretation of real world image sequences and their natural language description: the system SOCCER. Paper presented at Proceedings of the 8th European Conference on Artificial Intelligence (ECAI), Munich. |
Asset Economics, Inc. (Feb. 11, 2011). |
Bailey, P. (1999). Searching for Storiness: Story-Generation from a Reader's Perspective. AAAI Technical Report FS-99-01. |
Bethem, T., Burton, J., Caldwell, T., Evans, M., Kittredge, R., Lavoie, B., and Werner, J. (2005). Generation of Real-time Narrative Summaries for Real-time Water Levels and Meteorological Observations in PORTS®. In Proceedings of the Fourth Conference on Artificial Intelligence Applications to Environmental Sciences (AMS-2005), San Diego, California. |
Bourbeau, L., Carcagno, D., Goldberg, E., Kittredge, R., & Polguere, A. (1990). Bilingual generation of weather forecasts in an operations environment. Paper presented at Proceedings of the 13th International Conference on Computational Linguistics (COLING), Helsinki, Finland, pp. 318-320. |
Boyd, S. (1998). TREND: a system for generating intelligent descriptions of time series data. Paper presented at Proceedings of the IEEE international conference on intelligent processing systems (ICIPS-1998). |
Character Writer Version 3.1, Typing Chimp Software LLC, 2012, screenshots from working program, pp. 1-19. |
Dehn, N. (1981). Story generation after TALE-SPIN. In Proceedings of the Seventh International Joint Conference on Artificial Intelligence. (Vancouver, Canada). |
Dramatica Pro version 4, Write Brothers, 1993-2006, user manual. |
Gatt, A., and Portet, F. (2009). Text content and task performance in the evaluation of a Natural Language Generation System. Proceedings of the Conference on Recent Advances in Natural Language Processing (RANLP-09). |
Gatt, A., Portet, F., Reiter, E., Hunter, J., Mahamood, S., Moncur, W., and Sripada, S. (2009). From data to text in the Neonatal Intensive Care Unit: Using NLG technology for decision support and information management. AI Communications 22, pp. 153-186. |
Glahn, H. (1970). Computer-produced worded forecasts. Bulletin of the American Meteorological Society, 51(12), 1126-1131. |
Goldberg, E., Driedger, N., & Kittredge, R. (1994). Using Natural-Language Processing to Produce Weather Forecasts. IEEE Expert, 9 (2), 45. |
Hargood, C., Millard, D. and Weal, M. (2009) Exploring the Importance of Themes in Narrative Systems. |
Hargood, C., Millard, D. and Weal, M. (2009). Investigating a Thematic Approach to Narrative Generation, 2009. |
Hunter, J., Freer, Y., Gatt, A., Logie, R., McIntosh, N., van der Meulen, M., Portet, F., Reiter, E., Sripada, S., and Sykes, C. (2008). Summarising Complex ICU Data in Natural Language. AMIA 2008 Annual Symposium Proceedings, pp. 323-327. |
Hunter, J., Gatt, A., Portet, F., Reiter, E., and Sripada, S. (2008). Using natural language generation technology to improve information flows in intensive care units. Proceedings of the 5th Conference on Prestigious Applications of Intelligent Systems, PAIS-08. |
Kittredge, R., and Lavoie, B. (1998). MeteoCogent: A Knowledge-Based Tool For Generating Weather Forecast Texts. In Proceedings of the American Meteorological Society AI Conference (AMS-98), Phoenix, Arizona. |
Kittredge, R., Polguere, A., & Goldberg, E. (1986). Synthesizing weather reports from formatted data. Paper presented at Proceedings of the 11th International Conference on Computational Linguistics, Bonn, Germany, pp. 563-565. |
Kukich, K. (1983). Design of a Knowledge-Based Report Generator. Proceedings of the 21st Conference of the Association for Computational Linguistics, Cambridge, MA, pp. 145-150. |
Kukich, K. (1983). Knowledge-Based Report Generation: A Technique for Automatically Generating Natural Language Reports from Databases. Paper presented at Proceedings of the Sixth International ACM SIGIR Conference, Washington, DC. |
Mack et al., “A Framework for Metrics in Large Complex Systems”, IEEE Aerospace Conference Proceedings, 2004, pp. 3217-3228, vol. 5, doi: 10.1109/AERO.2004.1368127. |
Mahamood et al., “Generating Annotated Graphs Using the NLG Pipeline Architecture”, Proceedings of the 8th International Natural Language Generation Conference (INLG), 2014. |
McKeown, K., Kukich, K., & Shaw, J. (1994). Practical issues in automatic documentation generation. 4th Conference on Applied Natural Language Processing, Stuttgart, Germany, pp. 7-14. |
Meehan, James R. (1977). TALE-SPIN, An Interactive Program that Writes Stories. In Proceedings of the Fifth International Joint Conference on Artificial Intelligence. |
Memorandum Opinion and Order for O2 Media, LLC v. Narrative Science Inc., Case 1:15-cv-05129 (N.D. IL), Feb. 25, 2016, 25 pages (invalidating claims of U.S. Pat. Nos. 7,856,390, 8,494,944, and 8,676,691 owned by O2 Media, LLC). |
Moncur, W., and Reiter, E. (2007). How Much to Tell? Disseminating Affective Information across a Social Network. Proceedings of Second International Workshop on Personalisation for e-Health. |
Moncur, W., Masthoff, J., Reiter, E. (2008) What Do You Want to Know? Investigating the Information Requirements of Patient Supporters. 21st IEEE International Symposium on Computer-Based Medical Systems (CBMS 2008), pp. 443-448. |
Movie Magic Screenwriter, Write Brothers, 2009, user manual. |
Notice of Allowance for U.S. Appl. No. 16/047,800 dated Feb. 18, 2020. |
Office Action for U.S. Appl. No. 13/464,635 dated Feb. 22, 2016. |
Office Action for U.S. Appl. No. 14/521,264 dated Jun. 24, 2016. |
Office Action for U.S. Appl. No. 14/521,264 dated Nov. 25, 2016. |
Office Action for U.S. Appl. No. 14/570,834 dated Aug. 23, 2016. |
Office Action for U.S. Appl. No. 14/626,966 dated Jan. 25, 2016. |
Office Action for U.S. Appl. No. 14/626,966 dated Jul. 15, 2016. |
Office Action for U.S. Appl. No. 16/919,454 dated Nov. 10, 2021. |
Portet, F., Reiter, E., Gatt, A., Hunter, J., Sripada, S., Freer, Y., and Sykes, C. (2009). Automatic Generation of Textual Summaries from Neonatal Intensive Care Data. Artificial Intelligence. |
Portet, F., Reiter, E., Hunter, J., and Sripada, S. (2007). Automatic Generation of Textual Summaries from Neonatal Intensive Care Data. In: Bellazzi, Riccardo, Ameen Abu-Hanna and Jim Hunter (Ed.), 11th Conference on Artificial Intelligence in Medicine (AIME 07), pp. 227-236. |
Prosecution history for U.S. Appl. No. 14/521,264, now U.S. Pat. No. 9,720,899, filed Oct. 22, 2014. |
Prosecution history for U.S. Appl. No. 14/570,834, now U.S. Pat. No. 9,977,773, filed Dec. 15, 2014. |
Prosecution history for U.S. Appl. No. 14/570,858, now U.S. Pat. No. 9,697,492, filed Dec. 15, 2014. |
Prosecution history for U.S. Appl. No. 14/626,980, now U.S. Pat. No. 9,697,197, filed Feb. 20, 2015. |
Prosecution history for U.S. Appl. No. 15/895,800, now U.S. Pat. No. 10,747,823, filed Feb. 13, 2018. |
Prosecution history for U.S. Appl. No. 15/977,141, now U.S. Pat. No. 10,755,042, filed May 11, 2018. |
Prosecution history for U.S. Appl. No. 16/919,427, now U.S. Pat. No. 11,288,328, filed Jul. 2, 2020. |
Prosecution history for U.S. Appl. No. 16/919,454, filed Jul. 2, 2020. |
Prosecution History for U.S. Appl. No. 12/779,636, now U.S. Pat. No. 8,688,434, filed May 13, 2010. |
Number | Date | Country |
---|---|---|
20230053724 A1 | Feb 2023 | US |
 | Number | Date | Country |
---|---|---|---|
Parent | 17000516 | Aug 2020 | US |
Child | 17982662 | US | |
Parent | 15977141 | May 2018 | US |
Child | 17000516 | US | |
Parent | 14570834 | Dec 2014 | US |
Child | 15977141 | US | |
Parent | 14521264 | Oct 2014 | US |
Child | 14570834 | US |