The disclosed implementations relate generally to data visualization and more specifically to systems, methods, and user interfaces that enable users to interact with data visualizations and analyze data using natural language expressions.
Data visualization applications enable a user to understand a data set visually. Visual analyses of data sets, including distribution, trends, outliers, and other factors, are important to making business decisions. Some data sets are very large or complex and include many data fields. Various tools can be used to help understand and analyze the data, including dashboards that have multiple data visualizations and natural language interfaces that help with visual analytical tasks.
Natural language interfaces are becoming a useful modality for data exploration. However, supporting natural language interactions with visual analytical systems is often challenging. For example, users tend to type utterances that are linguistically colloquial, underspecified, or ambiguous, while the visual analytics system must handle the more complicated nuances of realizing those utterances against the underlying data and analytical functions. Users also expect high precision and recall from such natural language interfaces. However, unlike web search systems that rely on document indexing, visual analytical systems are constrained by the underlying analytical engine and data characteristics. While statistical and machine learning techniques can be employed, manually authoring and tuning a grammar for each new database is difficult and prohibitively expensive.
There is a need for improved systems and methods that support natural language interactions with visual analytical systems. The present disclosure describes a data visualization application that employs a set of inference techniques for handling ambiguity and underspecification of users' utterances, so as to generate useful data visualizations. The data visualization application uses syntactic and semantic constraints imposed by an intermediate language that resolves natural language utterances. The intermediate language resolves the underspecified utterances into formal queries that can be executed against a visual analytics system (e.g., the data visualization application) to generate useful data visualizations. Thus, the intermediate language reduces the cognitive burden on a user and produces a more efficient human-machine interface.
In accordance with some implementations, a method executes at a computing device having a display, one or more processors, and memory storing one or more programs configured for execution by the one or more processors. The method includes displaying a data visualization interface on the display. The method includes receiving user selection of a data source. The method further includes receiving a first user input to specify a natural language command directed to the data source. For example, the user input includes one or more words associated with the data source. The method further includes forming a first intermediate expression according to a context-free grammar and a semantic model of data fields in the data source by parsing the natural language command. When the first intermediate expression omits sufficient information for generating a data visualization, the computing device infers the omitted information associated with the data source using one or more inferencing rules based on syntactic and semantic constraints imposed by the context-free grammar. The computing device forms an updated intermediate expression using the first intermediate expression and the inferred information. The computing device translates the updated intermediate expression into one or more database queries. The computing device executes the one or more database queries to retrieve one or more data sets from the data source. The computing device generates and displays a data visualization of the retrieved data sets. In some implementations, the first intermediate expression is also known as a partial analytical expression or an underspecified expression. In some implementations, the updated intermediate expression is a fully specified expression.
In some implementations, forming the first intermediate expression includes using one or more pre-defined grammar rules governing the context-free grammar.
In some implementations, the predefined grammar rules include a predefined expression type that is one of: limit, group, aggregation, filter, and sort.
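The five predefined expression types can be pictured as small typed structures whose unfilled slots mark underspecification. The following Python sketch is purely illustrative; the slot names and the `is_underspecified` helper are assumptions, not the actual grammar of the disclosure.

```python
from dataclasses import dataclass
from typing import Optional, Union

# Hypothetical shapes for the five predefined expression types.
@dataclass
class Aggregation:
    func: str              # e.g. "average", "sum"
    field: Optional[str]   # None models an open (free) variable

@dataclass
class Group:
    field: Optional[str]

@dataclass
class Filter:
    field: Optional[str]
    op: str                # e.g. "at least", "equals"
    value: Optional[str]

@dataclass
class Limit:
    direction: str         # "top" or "bottom"
    count: int

@dataclass
class Sort:
    field: Optional[str]
    order: str             # "ascending" or "descending"

Expression = Union[Aggregation, Group, Filter, Limit, Sort]

def is_underspecified(expr: Expression) -> bool:
    """An expression is underspecified if any of its slots is still open."""
    return any(v is None for v in vars(expr).values())

# "avg price" yields a fully specified aggregation expression, while
# [average, x] leaves the Field slot open.
assert not is_underspecified(Aggregation("average", "Price"))
assert is_underspecified(Aggregation("average", None))
```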
In some instances, the omitted information includes an open variable of the data source. Inferring the omitted information includes assigning a non-logical constant to the open variable, and inferring an analytical concept for the non-logical constant.
In some instances, the analytical concept is one of: field, value, aggregation, group, filter, limit, and sort.
In some instances, inferring the omitted information associated with the data source includes inferring one or more second intermediate expressions. The updated intermediate expression uses the first intermediate expression and the one or more second intermediate expressions.
In some instances, the first intermediate expression is a sort expression, and the one or more second intermediate expressions include a group expression.
In some instances, the one or more second intermediate expressions further include an aggregation expression.
In some instances, the natural language command includes a data visualization type. Generating and displaying the data visualization of the retrieved data sets includes displaying a data visualization having the data visualization type.
In some instances, the data visualization type is one of: a bar chart, a Gantt chart, a line chart, a map, a pie chart, a scatter plot, and a tree map.
In some instances, the omitted information includes an underspecified concept. For example, the omitted information includes one or more vague or ambiguous concepts (e.g., terms) such as “low”, “high”, “good”, “bad”, “near”, and “far.” Inferring the omitted information includes identifying a data field associated with the underspecified concept, and inferring a range of predefined (e.g., default) values associated with the data field. Generating and displaying the data visualization includes generating and displaying the data visualization based on the range of predefined values.
In some instances, the range of predefined values includes one or more of: an average value, a standard deviation, and a maximum value associated with the data field.
In some instances, the method further comprises receiving a second user input modifying the range of predefined values. Responsive to the second user input, the computing device generates and displays an updated data visualization based on the modified range of predefined values.
In some implementations, receiving the user input to specify the natural language command further comprises receiving the user input via a user interface control in the data visualization interface.
In some implementations, after the computing device infers the omitted information, the computing device displays the inferred information as one or more options in the user interface control, each of the one or more options representing an interpretation of the inferred information.
In some implementations, displaying the inferred information as one or more options in the user interface control includes displaying the one or more options in a dropdown menu of the user interface.
In some implementations, the omitted information includes a missing field, and inferring the omitted information includes inferring the missing field based on a popularity score from telemetry usage data. In some implementations, textual fields such as “City” and “State” have a higher popularity score than numerical fields such as “Zip Code”.
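One way to picture inferring a missing field from telemetry is a lookup of popularity scores. The score table below is hypothetical; the disclosure does not specify how the scores are computed.

```python
# Hypothetical popularity scores derived from telemetry usage data.
POPULARITY = {
    "City": 0.9,
    "State": 0.85,
    "Zip Code": 0.3,   # numerical geo fields tend to score lower
}

def infer_missing_field(candidates, popularity=POPULARITY):
    """Pick the candidate field with the highest popularity score;
    unknown fields default to a score of zero."""
    return max(candidates, key=lambda f: popularity.get(f, 0.0))

assert infer_missing_field(["Zip Code", "City"]) == "City"
```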
In some instances, the natural language command directed to the data source includes a first temporal concept. Inferring the omitted information includes identifying a first temporal hierarchy (e.g., year, month, week, day, hour, minute, and second) associated with the first temporal concept. Inferring the omitted information also includes inferring a second temporal hierarchy (e.g., year, month, week, day, hour, minute, and second) associated with the data source, and retrieving from the data source a plurality of data fields having the second temporal hierarchy. The computing device further generates and displays the data visualization using the plurality of data fields having the second temporal hierarchy.
In some implementations, the plurality of data fields having the second temporal hierarchy has a level of detail that is more granular than the level of detail of data fields in the data source having the first temporal hierarchy.
In some implementations, generating and displaying a data visualization further comprises generating and displaying a data visualization having a particular data visualization type based on the inferred information.
In some implementations, a computing device includes one or more processors, memory, a display, and one or more programs stored in the memory. The programs are configured for execution by the one or more processors. The one or more programs include instructions for performing any of the methods described herein.
In some implementations, a non-transitory computer-readable storage medium stores one or more programs configured for execution by a computing device having one or more processors, memory, and a display. The one or more programs include instructions for performing any of the methods described herein.
Thus methods, systems, and graphical user interfaces are disclosed that enable users to easily interact with data visualizations and analyze data using natural language expressions.
For a better understanding of the aforementioned systems, methods, and graphical user interfaces, as well as additional systems, methods, and graphical user interfaces that provide data visualization analytics, reference should be made to the Description of Implementations below, in conjunction with the following drawings in which like reference numerals refer to corresponding parts throughout the figures.
Reference will now be made to implementations, examples of which are illustrated in the accompanying drawings. In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present invention. However, it will be apparent to one of ordinary skill in the art that the present invention may be practiced without requiring these specific details.
The various methods and devices disclosed in the present specification improve the effectiveness of natural language interfaces on data visualization platforms by resolving underspecified (e.g., omitted information) or ambiguous (e.g., vague) natural language utterances (e.g., expressions or commands) directed to a data source. The methods and devices leverage syntactic and semantic structure defined by an intermediate language. The intermediate language, also referred to herein as ArkLang, is designed to resolve natural language inputs into formal queries that can be executed against a database. A natural language input is lexically translated into ArkLang. A first intermediate expression of the input is formed in ArkLang. The omitted information associated with the data source is inferred using inferencing rules based on the syntactic and semantic constraints imposed by ArkLang. An updated intermediate expression is formed using the first intermediate expression and the inferred information, and is then translated (e.g., compiled) into a series of instructions employing a visualization query language to issue a query against a data source (e.g., database). The data visualization platform automatically generates and displays a data visualization (or an updated data visualization) of retrieved data sets in response to the natural language input. The visualization query language is a formal language for describing visual representations of data, such as tables, charts, graphs, maps, time series, and tables of visualizations. These different types of visual representations are unified into one framework, coupling query, analysis, and visualization. Thus, the visualization query language facilitates transformation from one visual representation to another (e.g., from a list view to a cross-tab to a chart).
The graphical user interface 100 also includes a data visualization region 112. The data visualization region 112 includes a plurality of shelf regions, such as a columns shelf region 120 and a rows shelf region 122. These are also referred to as the column shelf 120 and the row shelf 122. As illustrated here, the data visualization region 112 also has a large space for displaying a visual graphic (also referred to herein as a data visualization). Because no data elements have been selected yet, the space initially has no visual graphic. In some implementations, the data visualization region 112 has multiple layers that are referred to as sheets. In some implementations, the data visualization region 112 includes a region 126 for data visualization filters.
In some implementations, the graphical user interface 100 also includes a natural language input box 124 (also referred to as a command box) for receiving natural language commands. A user may interact with the command box to provide commands. For example, the user may provide a natural language command by typing the command in the box 124. In addition, the user may indirectly interact with the command box by speaking into a microphone 220 to provide commands. In some implementations, data elements are initially associated with the column shelf 120 and the row shelf 122 (e.g., using drag and drop operations from the schema information region 110 to the column shelf 120 and/or the row shelf 122). After the initial association, the user may use natural language commands (e.g., in the natural language input box 124) to further explore the displayed data visualization. In some instances, a user creates the initial association using the natural language input box 124, which results in one or more data elements being placed on the column shelf 120 and on the row shelf 122. For example, the user may provide a command to create a relationship between a data element X and a data element Y. In response to receiving the command, the column shelf 120 and the row shelf 122 may be populated with the data elements (e.g., the column shelf 120 may be populated with the data element X and the row shelf 122 may be populated with the data element Y, or vice versa).
In some implementations, the memory 206 includes high-speed random-access memory, such as DRAM, SRAM, DDR RAM, or other random-access solid-state memory devices; and may include non-volatile memory, such as one or more magnetic disk storage devices, optical disk storage devices, flash memory devices, or other non-volatile solid-state storage devices. In some implementations, the memory 206 includes one or more storage devices remotely located from the processor(s) 202. The memory 206, or alternatively the non-volatile memory device(s) within the memory 206, includes a non-transitory computer-readable storage medium. In some implementations, the memory 206 or the computer-readable storage medium of the memory 206 stores the following programs, modules, and data structures, or a subset or superset thereof:
In some implementations, the data source lexicon 254 includes other database objects 288 as well.
ArkLang can be generated from a set of semantic models (e.g., the semantic model 248) representing their corresponding databases, a context-free grammar (CFG), and a set of semantic constraints. In some implementations, a dialect of ArkLang is the set of all syntactically valid and semantically meaningful analytical expressions that can be generated by fixing a particular semantic model and leveraging the context-free grammar and a fixed set of semantic heuristics.
In some implementations, canonical representations are assigned to the analytical expressions 239 (e.g., by the natural language processing module 238) to address the problem of proliferation of ambiguous syntactic parses inherent to natural language querying. The canonical structures are unambiguous from the point of view of the parser, and the natural language processing module 238 is able to quickly choose between multiple syntactic parses to form intermediate expressions.
In some implementations, the computing device 200 also includes other modules such as an autocomplete module, which displays a dropdown menu with a plurality of candidate options when the user starts typing into the input box 124, and an ambiguity module to resolve syntactic and semantic ambiguities between the natural language commands and data fields (not shown). Details of these sub-modules are described in U.S. patent application Ser. No. 16/134,892, titled “Analyzing Natural Language Expressions in a Data Visualization User Interface,” filed Sep. 18, 2018, which is incorporated by reference herein in its entirety.
Analytical Expressions
In some implementations, and as described in U.S. patent application Ser. No. 16/166,125, the natural language processing module 238 parses the command “avg price” into the tokens “avg” and “price.” The natural language processing module 238 uses a lexicon (e.g., the first data source lexicon 254) corresponding to the data source 310 and identifies that the token “avg” is a synonym of the word “average”. The natural language processing module 238 further identifies that the term “average” specifies an aggregation type, and the token “price” specifies the data field to be aggregated. The user interface 100 returns (e.g., displays) one or more interpretations (e.g., intermediate expressions) 404 for the natural language command. In this example, the interface 100 displays “average Price” in a dropdown menu 406 of the graphical user interface.
Intra-Phrasal Inferencing
Intra-phrasal inferencing relies on constraints imposed by the syntactic and semantic structure of underspecified expressions. In some implementations, for each of the analytical expressions 239, a finite set of variables of that type (e.g., aggregation, group, filter, limit, sort) is assumed. For example, for the group expression, the variables are g1, . . . , gn for n≤ω. An expression is underspecified if the expression contains at least one free variable. For example, an underspecified aggregation expression is of the form [average, x], where x is a Field variable. While the aggregation (“average”) in this expression is defined, its Field is not; it is the free variable x. Similarly, [sales, at least, y] is an underspecified filter expression, where y is a Value variable.
Intra-phrasal inferencing is the process of instantiating an open variable in an intermediate expression with a non-logical constant of that type. In some implementations, intra-phrasal inferencing is referred to as the function Intra and is defined, in part, as follows:
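The formal definition of Intra is not reproduced in this excerpt. A minimal Python sketch of the idea follows, with hypothetical default constants standing in for the inferred non-logical constants:

```python
# Sketch of intra-phrasal inferencing: instantiate each open variable
# with a non-logical constant of the appropriate type. The default
# constants below are hypothetical placeholders, not the real rules.
DEFAULTS = {
    "field": "Price",   # e.g. a salient numerical field of the data source
    "value": "0",
}

def intra(expr: dict) -> dict:
    """Replace free variables (None slots) with defaults of their type."""
    resolved = dict(expr)
    for slot, val in expr.items():
        if val is None:
            resolved[slot] = DEFAULTS.get(slot, val)
    return resolved

# [average, x] with free Field variable x becomes fully specified:
assert intra({"aggregation": "average", "field": None}) == \
       {"aggregation": "average", "field": "Price"}
```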
In some implementations, responsive to the interpretation 504, the user may navigate a cursor 505 to the data field “Vintage” in the schema information region 110 to find out more information about the inferred data field “Vintage.”
In response to user selection of the first option 514-1 “by Winery with Vintage in 2015,” the computing device generates and displays an updated data visualization that reflects the selected interpretation.
In some implementations, the inferencing choices generated by the inferencing module 241 can be overridden by the user via a repair and refinement operation.
Inter-Phrasal Inferencing
In some implementations, given a fully specified analytical expression of ArkLang, additional fully specified analytical expressions are inferred (e.g., by the inferencing module 241) either because (i) the underlying visualization query language that ArkLang is translated (e.g., compiled) into requires such additional expressions to be co-present for purposes of query specification or (ii) such additional expressions improve the analytical usefulness of the resultant visualization.
With respect to (i), the visual specification for the visualization query language may require measure fields to be aggregated or require dimension fields to group the data into panes to generate a visualization. Therefore, filter and limit expressions require aggregated measures and grouped dimensions in play to select subsets of the data for analysis. A sort expression has a stricter constraint that requires the dimension that is being sorted to also be used to group the data in the visualization.
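The co-presence constraint for a sort expression can be sketched as follows; the expression shapes and the default aggregation are illustrative assumptions:

```python
# Sketch of inter-phrasal inferencing for a sort expression: the sorted
# dimension must also group the data, and a measure must be aggregated
# for the visual specification to be valid.
def infer_for_sort(sort_expr: dict) -> list:
    """Given a sort expression, infer the co-required expressions."""
    inferred = [{"type": "group", "field": sort_expr["field"]}]
    # Hypothetical default aggregation when the user specified none.
    inferred.append({"type": "aggregation", "func": "SUM",
                     "field": "NumberOfRecords"})
    return inferred

extras = infer_for_sort({"type": "sort", "field": "Winery",
                         "order": "descending"})
assert {"type": "group", "field": "Winery"} in extras
```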
With respect to (ii), when a user types “temperature throughout July 2018” (illustration not shown), the user likely expects the result to be a time-series visualization of the filter expression, to reveal the temporal aspects of the data. ArkLang supports the notion of level of detail in data hierarchies such as location and time. In order to generate a time-series line chart, the inferencing module 241 introspects the current level of detail of a temporal hierarchy in the filter expression, and infers a group expression of the temporal concept to be one level lower than the original temporal concept in the filter expression. An exception is the time unit “second”, which is the lowest level of the temporal hierarchy. In this instance, the inferencing module 241 simply infers “second”.
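Assuming the seven-level temporal hierarchy enumerated earlier (year through second), the level-lowering inference might be sketched as:

```python
# Temporal hierarchy from coarsest to finest level of detail.
TEMPORAL_LEVELS = ["year", "month", "week", "day", "hour", "minute", "second"]

def infer_group_level(filter_level: str) -> str:
    """Infer a group expression one level lower (finer) than the
    temporal concept in the filter expression; "second" is already
    the lowest level, so it is inferred as-is."""
    i = TEMPORAL_LEVELS.index(filter_level)
    return TEMPORAL_LEVELS[min(i + 1, len(TEMPORAL_LEVELS) - 1)]

# "temperature throughout July 2018" filters at the "month" level,
# so the inferred group expression is at the "week" level.
assert infer_group_level("month") == "week"
assert infer_group_level("second") == "second"
```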
In some implementations, the inferencing module 241 also supports (e.g., interprets) terse expressions.
In response to user selection of the first option 620 “by Taster name”, the computing device generates and displays a data visualization grouped by the data field “Taster name”.
In some implementations, the inferencing module 241 infers a default aggregation expression “SUM(NumberOfRecords)” when a user does not specify an aggregation expression. In some implementations, “Number of Records” is an automatically generated calculated field in the data visualization application 230 that has the value 1 for each record in the database (e.g., the data source 310 and the database/data sources 242).
User Specification of a Data Visualization Type
During visual analysis, users may explicitly express their intent for a specific graphical representation. For example, a user may specify a line chart to perform temporal analysis. In some implementations, the inferencing logic for deducing sensible attributes to satisfy valid visualizations relies on an integrated set of rules and defaults (also referred to herein as Show Me). Show Me incorporates automatic presentation from the row and column structure of a data visualization query expression. In some implementations, Show Me also adopts best practices from graphics design and information visualization when ranking analytically useful visualizations based on the type of attributes utilized in the analysis workflow. Many features of Show Me are described in U.S. Pat. No. 8,099,674, entitled “Computer Systems and Methods for Automatically Viewing Multidimensional Databases,” which is incorporated by reference herein in its entirety.
In some implementations, the data visualization application 230 assigns different ranks to different data visualization types. A higher rank is assigned to a data visualization that presents views that encode data graphically. Text tables are assigned the lowest rank because their primary utility is to look up specific values and they do not encode data graphically. In some implementations, text tables are always available as a default visualization, as no attribute needs to be inferred to display a text table.
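A hedged sketch of rank-based selection follows; the rank values are invented, and only the relative ordering (graphical encodings above text tables) follows the description above:

```python
# Hypothetical ranks: higher for views that encode data graphically,
# lowest for text tables, which are always available as a default.
VIZ_RANKS = {
    "line chart": 5,
    "bar chart": 5,
    "scatter plot": 4,
    "map": 4,
    "text table": 0,
}

def best_visualization(valid_types):
    """Among the visualization types valid for the attributes in play,
    choose the highest-ranked; fall back to a text table."""
    if not valid_types:
        return "text table"
    return max(valid_types, key=lambda t: VIZ_RANKS.get(t, 0))

assert best_visualization([]) == "text table"
assert best_visualization(["text table", "bar chart"]) == "bar chart"
```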
In some implementations, the data visualization application supports the following visualizations and enumerates their corresponding inferencing logic when a user explicitly asks for these chart types in their input utterances (e.g., natural language commands):
Resolving Vague Predicates
Vagueness is a linguistic phenomenon manifested by concepts such as “low,” “high,” “good,” and “near.” These concepts are termed “vague” and/or “ambiguous” because the extensions of such concepts cannot be precisely determined and generalized in certain domains and contexts.
In some implementations, using metadata provided by the Semantic Model, the inferencing logic is extended to make smart defaults for such concepts in ArkLang. For example, for the utterance “where are the expensive wines?”, the application infers (e.g., using the inferencing module 241) the vague concept “expensive” to range over [avg+1SD, max], where avg, SD, and max are the average, standard deviation, and maximum values for the numerical field “Price”, which also has metadata indicating that it is a currency attribute. In some implementations, the system also collects telemetry data about overrides and user interactions, providing a feedback loop that improves the relevance of the inferencing logic.
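A sketch of this default using Python's statistics module; whether the sample or population standard deviation is intended is not specified in this excerpt, so the sample version is assumed here:

```python
from statistics import mean, stdev

def resolve_expensive(prices):
    """Map the vague concept "expensive" to the default numeric range
    [avg + 1 standard deviation, max] over the "Price" field values.
    Sample standard deviation is an assumption."""
    avg, sd = mean(prices), stdev(prices)
    return (avg + sd, max(prices))

prices = [10, 20, 30, 40, 200]
low, high = resolve_expensive(prices)
assert high == 200          # upper bound is the maximum price
assert low > mean(prices)   # lower bound sits one SD above the average
```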
In response to the user modification, the computing device generates and displays an updated data visualization based on the modified range of values.
Flowchart
In some implementations, an intermediate language (also referred to as ArkLang) facilitates the process of issuing natural language queries to a generic database. In some implementations, the translation from a natural language input to visualization query language (VizQL) commands for generating a visualization response uses the following algorithm:
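The algorithm itself does not appear in this excerpt. The following runnable sketch mirrors the described steps (parse, infer, compile); every helper is a trivial stand-in for the real component named in its docstring, and all names are hypothetical:

```python
def parse_to_arklang(command):
    """Stand-in for the CKY parse of a command into an intermediate
    ArkLang expression."""
    tokens = command.lower().split()
    agg = "average" if "avg" in tokens or "average" in tokens else None
    field = tokens[-1].capitalize()
    return {"aggregation": agg, "field": field}

def infer_omitted(expr):
    """Stand-in for intra-/inter-phrasal inferencing: fill open slots."""
    if expr["aggregation"] is None:
        expr["aggregation"] = "sum"   # hypothetical default aggregation
    return expr

def translate_to_vizql(expr):
    """Stand-in for compilation of a fully specified expression into
    visualization query language commands."""
    return f'{expr["aggregation"].upper()}({expr["field"]})'

query = translate_to_vizql(infer_omitted(parse_to_arklang("avg price")))
assert query == "AVERAGE(Price)"
```

The real pipeline additionally executes the compiled queries against the data source and renders the resulting data sets as a visualization.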
The method 900 is performed (904) at a computing device 200 that has (904) a display 212, one or more processors 202, and memory 206. The memory 206 stores (906) one or more programs configured for execution by the one or more processors 202.
The computing device 200 displays (908) a data visualization interface 100 on the display 212.
The computing device 200 receives (910) user selection of a data source. For example, the computing device receives user selection of the data source 310.
The computing device 200 receives (912) a first user input to specify a natural language command directed to the data source (e.g., the database or data sources 242 or the data source 310). In some implementations, the user input includes one or more fields associated with the data source.
In some implementations, the computing device 200 receives (916) the user input via a user-interface control in the data visualization interface 100. For example, the computing device receives the user input via the command box 124 of the graphical user interface 100. In some implementations, the user may enter (e.g., type in) the user input. In some implementations, the user input is a voice utterance captured by the audio input device 220.
The computing device 200 forms (918) a first intermediate expression (e.g., using the natural language processing module 238) according to a context-free grammar and a semantic model 248 of data fields in the data source by parsing the natural language command.
In some implementations, a Cocke-Kasami-Younger (CKY) parsing algorithm is used for parsing the natural language command. The CKY algorithm employs bottom-up parsing and dynamic programming on a context-free grammar. The input to the underlying CKY parser is this context-free grammar, with production rules augmented with both syntactic and semantic predicates based on analytical expressions that correspond to basic database operations in the underlying analytical functionality.
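As an illustration of CKY-style recognition, the toy grammar below (in Chomsky normal form, with invented nonterminals) recognizes a command such as “average price”; the real grammar and its semantic predicates are far richer:

```python
from collections import defaultdict

# Toy context-free grammar in Chomsky normal form. The nonterminals
# and rules are invented for illustration only.
TERMINAL_RULES = {          # A -> terminal
    "average": {"AggFunc"},
    "price": {"Field"},
}
BINARY_RULES = {            # A -> B C
    ("AggFunc", "Field"): {"AggExpr"},
}

def cky_recognize(tokens, start="AggExpr"):
    """Bottom-up dynamic programming: table[(i, j)] holds the
    nonterminals that derive tokens[i:j]."""
    n = len(tokens)
    table = defaultdict(set)
    for i, tok in enumerate(tokens):
        table[(i, i + 1)] = set(TERMINAL_RULES.get(tok, ()))
    for span in range(2, n + 1):
        for i in range(n - span + 1):
            j = i + span
            for k in range(i + 1, j):
                for b in table[(i, k)]:
                    for c in table[(k, j)]:
                        table[(i, j)] |= BINARY_RULES.get((b, c), set())
    return start in table[(0, n)]

assert cky_recognize(["average", "price"])
assert not cky_recognize(["price", "average"])
```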
In some implementations, the computing device 200 forms (920) the first intermediate expression using one or more pre-defined grammar rules governing the context-free grammar. In some implementations, the predefined grammar rules are specified in Backus-Naur Form.
In some implementations, the predefined grammar rules include (922) a predefined expression type that is one of: limit, group, aggregation, filter, and sort.
In accordance with a determination (924) that the first intermediate expression omits sufficient information for generating a data visualization, the computing device 200 infers (926) the omitted information associated with the data source using one or more inferencing rules based on syntactic and semantic constraints imposed by the context-free grammar. In some implementations, the first intermediate expression is also known as a partial analytical expression or an underspecified expression.
In some implementations, the omitted information includes (928) an open variable of the data source. The computing device 200 assigns (930) a non-logical constant to the open variable, and infers an analytical concept for the non-logical constant. In other words, the non-logical constant only has meaning or semantic content when an interpretation assigns one to it.
In some implementations, the analytical concept is (932) one of: field, value, aggregation, group, filter, limit, and sort.
The computing device 200 forms (956) an updated intermediate expression using the first intermediate expression and the inferred information. In other words, the updated intermediate expression is a syntactically viable expression of the context-free grammar. In some implementations, the updated intermediate expression is also known as a fully specified expression.
In some implementations, inferring the omitted information includes (934) inferring one or more second intermediate expressions. The updated intermediate expression uses (958) the first intermediate expression and the one or more second intermediate expressions.
In some implementations, the first intermediate expression is (960) a sort expression, and the one or more second intermediate expressions include a group expression.
In some implementations, the one or more second intermediate expressions further include (962) an aggregation expression. In some implementations and instances, the computing device 200 infers a default aggregation expression (e.g., “SUM(NumberOfRecords)”) when a user does not specify an aggregation expression.
The computing device 200 translates (964) the updated intermediate expression into one or more database queries.
The computing device 200 executes (966) the one or more database queries to retrieve one or more data sets from the data source.
The computing device 200 generates (968) and displays a data visualization of the retrieved data sets.
In some implementations, the natural language command includes (914) a data visualization type.
In some implementations, the data visualization type is (915) one of: a bar chart, a Gantt chart, a line chart, a map, a pie chart, a scatter plot, and a tree map.
In some implementations, the omitted information includes (936) an underspecified concept. For example, the omitted information includes one or more vague or ambiguous concepts (or terms) such as “high,” “low,” “good,” “bad,” “near,” and “far.” Inferring the omitted information includes identifying (938) a data field associated with the underspecified concept, and inferring a range of predefined (e.g., default) values associated with the data field. The generated and displayed data visualization is (972) based on the range of predefined values.
In some implementations, the range of predefined values includes (940) one or more of an average value, a standard deviation, and a maximum value associated with the data field.
In some implementations, the method 900 further comprises receiving (974) a second user input modifying the range of predefined values (e.g., using a slider affordance or by entering the desired values). Responsive to the second user input, the computing device 200 generates and displays an updated data visualization based on the modified range of predefined values.
In some implementations, after the computing device 200 infers the omitted information, the computing device 200 displays (982) the inferred information as one or more options in the user interface control, each of the one or more options representing an interpretation of the inferred information.
In some implementations, the one or more options are (984) displayed in a dropdown menu (e.g., the dropdown menu 406) of the user interface.
In some implementations, the omitted information includes (942) a missing field, and inferring the omitted information includes inferring (944) the missing field based on a popularity score from telemetry usage data. For example, a field that is referenced more often is assigned a higher popularity score. In some implementations, the popularity score is based on a set of heuristics grounded in principles of information visualization. For example, geographic fields such as "Postal Code" have a lower popularity score because they tend to be less salient than counterparts such as "City" or "State." Similarly, when inferring a time concept in an utterance such as "show me orders 2015," relative time concepts (e.g., "last 2015 years") tend to be less salient than absolute time concepts (e.g., "in the year 2015").
In some implementations, the natural language command includes (946) a first temporal concept. Inferring the omitted information includes identifying (948) a first temporal hierarchy associated with the first temporal concept. Inferring the omitted information also includes (950) inferring a second temporal hierarchy associated with the data source. The computing device 200 retrieves (952) from the data source a plurality of data fields having the second temporal hierarchy. The computing device 200 generates (978) and displays the data visualization using the plurality of data fields having the second temporal hierarchy. For example, in response to the command "Show me sum of sales in July 2018," the computing device 200 identifies the first temporal hierarchy "month" and infers the second temporal hierarchy "week." The computing device 200 then generates and displays a data visualization of sales data aggregated by week.
In some implementations, the plurality of data fields having the second temporal hierarchy has (954) a level of detail that is more granular than the level of detail of data fields in the data source having the first temporal hierarchy.
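Steps (946)-(954) amount to locating the concept's level in a temporal hierarchy and drilling one level finer for display. The hierarchy ordering below is an illustrative assumption.

```python
# Sketch of steps (946)-(954): given the hierarchy level named by a
# temporal concept, select a more granular level for the visualization.

HIERARCHY = ["year", "quarter", "month", "week", "day"]

def more_granular(level):
    """Return the next, more granular hierarchy level (or the finest one)."""
    i = HIERARCHY.index(level)
    return HIERARCHY[min(i + 1, len(HIERARCHY) - 1)]

# "July 2018" names a month, so the visualization aggregates by week
first = "month"
second = more_granular(first)
print(first, "->", second)  # month -> week
```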
In some implementations, generating and displaying a data visualization further comprises generating (980) and displaying a data visualization having a particular data visualization type based on the inferred information. For example, in response to the natural language command that includes the term “correlate,” the computing device 200 infers a scatter plot, as illustrated in
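Step (980) can be sketched as matching cue terms in the command against visualization types. The keyword table is an assumption; it is not the disclosed mapping.

```python
# Sketch of step (980): infer a data visualization type from terms
# appearing in the natural language command.

VIZ_KEYWORDS = {
    "correlate": "scatter plot",
    "over time": "line chart",
    "by state": "map",
}

def infer_viz_type(command, default="bar chart"):
    """Return a visualization type suggested by cue terms, else a default."""
    text = command.lower()
    for keyword, viz in VIZ_KEYWORDS.items():
        if keyword in text:
            return viz
    return default

print(infer_viz_type("Correlate sales and profit"))  # scatter plot
```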
Each of the above identified executable modules, applications, or sets of procedures may be stored in one or more of the previously mentioned memory devices, and corresponds to a set of instructions for performing a function described above. The above identified modules or programs (i.e., sets of instructions) need not be implemented as separate software programs, procedures, or modules, and thus various subsets of these modules may be combined or otherwise re-arranged in various implementations. In some implementations, the memory 214 stores a subset of the modules and data structures identified above. Furthermore, the memory 214 may store additional modules or data structures not described above.
The terminology used in the description of the invention herein is for the purpose of describing particular implementations only and is not intended to be limiting of the invention. As used in the description of the invention and the appended claims, the singular forms “a,” “an,” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will also be understood that the term “and/or” as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items. It will be further understood that the terms “comprises” and/or “comprising,” when used in this specification, specify the presence of stated features, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, steps, operations, elements, components, and/or groups thereof.
The foregoing description, for purpose of explanation, has been described with reference to specific implementations. However, the illustrative discussions above are not intended to be exhaustive or to limit the invention to the precise forms disclosed. Many modifications and variations are possible in view of the above teachings. The implementations were chosen and described in order to best explain the principles of the invention and its practical applications, to thereby enable others skilled in the art to best utilize the invention and various implementations with various modifications as are suited to the particular use contemplated.
This application is a continuation of U.S. patent application Ser. No. 16/234,470, filed Dec. 27, 2018, entitled “Analyzing Underspecified Natural Language Utterances in a Data Visualization User Interface,” which claims priority to U.S. Provisional Patent Application No. 62/742,857, filed Oct. 8, 2018, entitled “Inferencing Underspecified Natural Language Utterances in Visual Analysis,” each of which is incorporated by reference herein in its entirety. This application is related to U.S. patent application Ser. No. 16/166,125, filed Oct. 21, 2018, titled “Determining Levels of Detail for Data Visualizations Using Natural Language Constructs,” U.S. patent application Ser. No. 16/134,892, filed Sep. 18, 2018, titled “Analyzing Natural Language Expressions in a Data Visualization User Interface,” U.S. patent application Ser. No. 15/486,265, filed Apr. 12, 2017, titled, “Systems and Methods of Using Natural Language Processing for Visual Analysis of a Data Set,” and U.S. patent application Ser. No. 14/801,750, filed Jul. 16, 2015, titled “Systems and Methods for using Multiple Aggregation Levels in a Single Data Visualization,” each of which is incorporated by reference herein in its entirety.
Number | Name | Date | Kind |
---|---|---|---|
4800810 | Masumoto | Jan 1989 | A |
5036314 | Barillari et al. | Jul 1991 | A |
5060980 | Johnson et al. | Oct 1991 | A |
5144452 | Abuyama | Sep 1992 | A |
5169713 | Kumurdjian | Dec 1992 | A |
5265244 | Ghosh et al. | Nov 1993 | A |
5265246 | Li et al. | Nov 1993 | A |
5377348 | Lau et al. | Dec 1994 | A |
5383029 | Kojima | Jan 1995 | A |
5560007 | Thai | Sep 1996 | A |
5577241 | Spencer | Nov 1996 | A |
5581677 | Myers et al. | Dec 1996 | A |
5664172 | Antoshenkov | Sep 1997 | A |
5664182 | Nierenberg et al. | Sep 1997 | A |
5668987 | Schneider | Sep 1997 | A |
5794246 | Sankaran et al. | Aug 1998 | A |
5864856 | Young | Jan 1999 | A |
5893088 | Hendricks et al. | Apr 1999 | A |
5913205 | Jain et al. | Jun 1999 | A |
5933830 | Williams | Aug 1999 | A |
6031632 | Yoshihara et al. | Feb 2000 | A |
6032158 | Mukhopadhyay et al. | Feb 2000 | A |
6044374 | Nesamoney et al. | Mar 2000 | A |
6100901 | Mohda et al. | Aug 2000 | A |
6115744 | Robins et al. | Sep 2000 | A |
6154766 | Yost et al. | Nov 2000 | A |
6173310 | Yost et al. | Jan 2001 | B1 |
6188403 | Sacerdoti et al. | Feb 2001 | B1 |
6208990 | Suresh et al. | Mar 2001 | B1 |
6222540 | Sacerdoti et al. | Apr 2001 | B1 |
6247008 | Cambot et al. | Jun 2001 | B1 |
6253257 | Dundon | Jun 2001 | B1 |
6260050 | Yost et al. | Jul 2001 | B1 |
6269393 | Yost et al. | Jul 2001 | B1 |
6300957 | Rao et al. | Oct 2001 | B1 |
6301579 | Becker | Oct 2001 | B1 |
6317750 | Tortolani et al. | Nov 2001 | B1 |
6327628 | Anuff et al. | Dec 2001 | B1 |
6339775 | Zamanian et al. | Jan 2002 | B1 |
6377259 | Tenev et al. | Apr 2002 | B2 |
6397195 | Pinard et al. | May 2002 | B1 |
6400366 | Davies et al. | Jun 2002 | B1 |
6405195 | Ahlberg | Jun 2002 | B1 |
6405208 | Raghavan et al. | Jun 2002 | B1 |
6424933 | Agrawala et al. | Jul 2002 | B1 |
6490593 | Proctor | Dec 2002 | B2 |
6492989 | Wilkinson | Dec 2002 | B1 |
6522342 | Gagnon et al. | Feb 2003 | B1 |
6611825 | Billheimer et al. | Aug 2003 | B1 |
6643646 | Su et al. | Nov 2003 | B2 |
6707454 | Barg et al. | Mar 2004 | B1 |
6714897 | Whitney et al. | Mar 2004 | B2 |
6725230 | Ruth et al. | Apr 2004 | B2 |
6750864 | Anwar | Jun 2004 | B1 |
6768986 | Cras et al. | Jul 2004 | B2 |
6906717 | Couckuyt et al. | Jun 2005 | B2 |
7009609 | Miyadai | Mar 2006 | B2 |
7023453 | Wilkinson | Apr 2006 | B2 |
7089266 | Stolte et al. | Aug 2006 | B2 |
7117058 | Lin et al. | Oct 2006 | B2 |
7176924 | Wilkinson | Feb 2007 | B2 |
7290007 | Farber et al. | Oct 2007 | B2 |
7302383 | Valles | Nov 2007 | B2 |
7315305 | Crotty et al. | Jan 2008 | B2 |
7379601 | Yang et al. | May 2008 | B2 |
7426520 | Gorelik et al. | Sep 2008 | B2 |
7603267 | Wang | Oct 2009 | B2 |
7716173 | Stolte et al. | May 2010 | B2 |
7882144 | Stolte et al. | Feb 2011 | B1 |
8082243 | Gorelik et al. | Dec 2011 | B2 |
8140586 | Stolte et al. | Mar 2012 | B2 |
8442999 | Gorelik et al. | May 2013 | B2 |
8473521 | Fot et al. | Jun 2013 | B2 |
8620937 | Jonas | Dec 2013 | B2 |
8713072 | Stolte et al. | Apr 2014 | B2 |
8751505 | Carmel et al. | Jun 2014 | B2 |
8874613 | Gorelik et al. | Oct 2014 | B2 |
8972457 | Stolte et al. | Mar 2015 | B2 |
9183235 | Stolte et al. | Nov 2015 | B2 |
9299173 | Rope | Mar 2016 | B2 |
9336253 | Gorelik et al. | May 2016 | B2 |
9501585 | Gautam | Nov 2016 | B1 |
9633091 | Stolte et al. | Apr 2017 | B2 |
9665662 | Gautam et al. | May 2017 | B1 |
9818211 | Gibb | Nov 2017 | B1 |
9858292 | Setlur | Jan 2018 | B1 |
9947314 | Cao et al. | Apr 2018 | B2 |
9983849 | Weingartner | May 2018 | B2 |
10042517 | Stolte et al. | Aug 2018 | B2 |
10042901 | Stolte et al. | Aug 2018 | B2 |
10331720 | Neels | Jun 2019 | B2 |
10418032 | Mohajer | Sep 2019 | B1 |
10515121 | Setlur | Dec 2019 | B1 |
10546001 | Nguyen | Jan 2020 | B1 |
10546003 | Gupta | Jan 2020 | B2 |
10564622 | Dean | Feb 2020 | B1 |
10817527 | Setlur | Oct 2020 | B1 |
10956655 | Choe | Mar 2021 | B2 |
11080336 | Van Dusen | Aug 2021 | B2 |
11114189 | Prosky | Sep 2021 | B2 |
11720240 | Setlur | Aug 2023 | B1 |
20010013036 | Judicibus | Aug 2001 | A1 |
20020002325 | Lliff | Jan 2002 | A1 |
20020059204 | Harris | May 2002 | A1 |
20020118192 | Couckuyt et al. | Aug 2002 | A1 |
20020123865 | Whitney et al. | Sep 2002 | A1 |
20020135610 | Ootani et al. | Sep 2002 | A1 |
20020154118 | McCarthy et al. | Oct 2002 | A1 |
20030200034 | Fellenberg et al. | Oct 2003 | A1 |
20040148170 | Acero et al. | Jul 2004 | A1 |
20040183800 | Peterson | Sep 2004 | A1 |
20040227759 | McKnight et al. | Nov 2004 | A1 |
20040243593 | Stolte et al. | Dec 2004 | A1 |
20050035966 | Pasquarette et al. | Feb 2005 | A1 |
20050035967 | Joffrain et al. | Feb 2005 | A1 |
20050060300 | Stolte et al. | Mar 2005 | A1 |
20050099423 | Brauss | May 2005 | A1 |
20060129913 | Vigesaa et al. | Jun 2006 | A1 |
20060136825 | Cory et al. | Jun 2006 | A1 |
20060206512 | Hanrahan et al. | Sep 2006 | A1 |
20070061344 | Dickerman et al. | Mar 2007 | A1 |
20070061611 | MacKinlay et al. | Mar 2007 | A1 |
20070129936 | Wang | Jun 2007 | A1 |
20080016026 | Farber et al. | Jan 2008 | A1 |
20090313576 | Neumann | Dec 2009 | A1 |
20110119047 | Ylonen | May 2011 | A1 |
20110184718 | Chen | Jul 2011 | A1 |
20120323948 | Li | Dec 2012 | A1 |
20130249917 | Fanning | Sep 2013 | A1 |
20140164362 | Syed et al. | Jun 2014 | A1 |
20140236579 | Kurz | Aug 2014 | A1 |
20160078354 | Petri et al. | Mar 2016 | A1 |
20160092090 | Stojanovic | Mar 2016 | A1 |
20160171050 | Das | Jun 2016 | A1 |
20160179908 | Johnston et al. | Jun 2016 | A1 |
20170011023 | Ghannam et al. | Jan 2017 | A1 |
20170091277 | Zoch | Mar 2017 | A1 |
20170091902 | Bostik et al. | Mar 2017 | A1 |
20170118308 | Vigeant | Apr 2017 | A1 |
20170154089 | Sherman | Jun 2017 | A1 |
20170308571 | McCurley | Oct 2017 | A1 |
20180032576 | Romero | Feb 2018 | A1 |
20180039614 | Govindarajulu | Feb 2018 | A1 |
20180129941 | Gustafson | May 2018 | A1 |
20180137424 | Gabaldon Royval | May 2018 | A1 |
20180158245 | Govindan | Jun 2018 | A1 |
20180203924 | Agrawal | Jul 2018 | A1 |
20180210883 | Ang | Jul 2018 | A1 |
20180329987 | Tata | Nov 2018 | A1 |
20190042634 | Stolte et al. | Feb 2019 | A1 |
20190102390 | Antunes et al. | Apr 2019 | A1 |
20190108171 | Stolte et al. | Apr 2019 | A1 |
20190115016 | Seok et al. | Apr 2019 | A1 |
20190120649 | Seok et al. | Apr 2019 | A1 |
20190121801 | Jethwa | Apr 2019 | A1 |
20190138648 | Gupta | May 2019 | A1 |
20190163807 | Jain et al. | May 2019 | A1 |
20190179607 | Thangarathnam et al. | Jun 2019 | A1 |
20190197605 | Sadler | Jun 2019 | A1 |
20190236144 | Hou | Aug 2019 | A1 |
20190272296 | Prakash et al. | Sep 2019 | A1 |
20190311717 | Kim et al. | Oct 2019 | A1 |
20190349321 | Cai et al. | Nov 2019 | A1 |
20190384815 | Patel | Dec 2019 | A1 |
20200065385 | Dreher | Feb 2020 | A1 |
20200089700 | Ericson | Mar 2020 | A1 |
20200089760 | Ericson | Mar 2020 | A1 |
20200090189 | Tutuk et al. | Mar 2020 | A1 |
20200104402 | Burnett et al. | Apr 2020 | A1 |
20200110803 | Djalali | Apr 2020 | A1 |
20200134103 | Mankovskii | Apr 2020 | A1 |
20200327432 | Doebelin et al. | Oct 2020 | A1 |
20230134235 | Setlur | May 2023 | A1 |
Number | Date | Country |
---|---|---|
215657 | Jan 1994 | HU |
WO 2006060773 | Jun 2006 | WO |
Entry |
---|
Setlur V, Tory M, Djalali A. Inferencing underspecified natural language utterances in visual analysis. In Proceedings of the 24th International Conference on Intelligent User Interfaces, Mar. 17, 2019 (pp. 40-51). (Year: 2019). |
Becker, Trellis Graphics Displays: A Multi-dimensional Data Visualization Tool for Data Mining, Aug. 1997, 13 pgs. |
Becker, Visualizing Decision Table Classifiers, 1998, 4 pgs. |
Beers, Office Action, U.S. Appl. No. 11/787,761, dated Jun. 12, 2008, 12 pgs. |
Beers, Office Action, U.S. Appl. No. 11/787,761, dated Dec. 17, 2008, 13 pgs. |
Bosch, Performance Analysis and Visualization of Parallel Systems Using SimOS and Rivet: A Case Study, Jan. 2000, 13 pgs. |
Bosch, Rivet: A Flexible Environment for Computer Systems Visualization, Feb. 2000, 9 pgs. |
Brunk, MineSet: An Integrated System for Data Mining, 1997, 4 pgs. |
Derthick, An Interactive Visual Query Environment for Exploring Data, 1997, 11 pgs. |
Freeze, Unlocking OLAP with Microsoft SQL Server and Excel 2000, 2000, 220 pgs. |
Fua, “Hierarchical Parallel Coordinates for Exploration of Large Datasets,” IEEE 1999, pp. 43-50 (Year: 1999). |
Eser Kandogan, “Star Coordinates: A Multi-dimensional Visualization Technique with Uniform Treatment of Dimensions,” www.citeseerx.st.psu.edu, pp. 1-4, 2000 (Year: 2000). |
Fua, Navigating Hierarchies with Structure-Based Brushes, 1999, 7 pgs. |
Gao, Tong, et al. “Datatone: Managing ambiguity in natural language interfaces for data visualization.” Proceedings of the 28th Annual ACM Symposium on User Interface Software & Technology. Nov. 2015, pp. 489-500. (Year: 2015). |
Goldstein, A Framework for Knowledge-Based Interactive Data Exploration, Dec. 1994, 30 pgs. |
Gray, Data Cube: A Relational Aggregation Operator Generalizing Group-By, 1997, 24 pgs. |
Hanrahan, Office Action, U.S. Appl. No. 11/005,652, dated Feb. 20, 2009, 11 pgs. |
Hanrahan, Office Action, U.S. Appl. No. 11/005,652, dated Jul. 24, 2008, 11 pgs. |
Hanrahan, Office Action, U.S. Appl. No. 11/005,652, dated Dec. 27, 2007, 11 pgs. |
Hanrahan, Specification, U.S. Appl. No. 11/005,652, filed Dec. 2, 2004, 104 pgs. |
Healey, On the Use of Perceptual Cues and Data Mining for Effective Visualization of Scientific Datasets, 1998, 8 pgs. |
Hearst, Office Action, U.S. Appl. No. 16/601,413, dated Nov. 3, 2020, 17 pgs. |
Hearst, Notice of Allowance, U.S. Appl. No. 16/601,413, dated Mar. 3, 2021, 10 pgs. |
HU Search Report, HU P0700460, dated Oct. 9, 2007, 1 pg. |
John V. Carlis and Joseph A. Konstan, Interactive Visualization of Serial Periodic Data, www.Courses.ischool.berkeley.edu, pp. 1-10, 1998 (Year: 1998). |
Joseph, Office Action, U.S. Appl. No. 13/734,694, dated Aug. 18, 2014, 46 pgs. |
Keim, VisDB: Database Exploration Using Multidimensional Visualization, Aug. 1994, 27 pgs. |
Kohavi, Data Mining and Visualization, 2000, 8 pgs. |
Livney, M. et al., “DEVise: Integrated Querying and Visual Exploration of Large Datasets,” ACM, 1997, pp. 301-312, (Year: 1997). |
MacDonald, Creating Basic Charts, 2006, 46 pgs. |
MacKinlay, Automating the Design of Graphical Presentations of Relational Information, 1986, 34 pgs. |
MacKinlay, Office Action, U.S. Appl. No. 11/223,658, dated May 21, 2008, 20 pgs. |
MacKinlay, Office Action, U.S. Appl. No. 11/223,658, dated Feb. 23, 2009, 19 pgs. |
MacKinlay, Specification, U.S. Appl. No. 11/223,658, filed Sep. 9, 2005, 58 pgs. |
Matsushita, Mitsunori, Eisaku Maeda, and Tsuneaki Kato. “An interactive visualization method of numerical data based on natural language requirements.” International journal of human-computer studies 60.4, Apr. 2004, pp. 469-488. (Year: 2004). |
Perlin, An Alternative Approach to the Computer Interface, 1993, 11 pgs. |
Popescu, et al. “Towards a theory of natural language interfaces to databases.” Proceedings of the 8th international conference on Intelligent user interfaces. Jan. 2003, pp. 149-157. (Year: 2003). |
Rao, The Table Lens: Merging Graphical and Symbolic Representation in an Interactive Focus+Context Visualization for Tabular Information, Apr. 1994, 7 pgs. |
Roth, Interactive Graphic Design Using Automatic Presentation Knowledge, Apr. 24-28, 1994, 7 pgs. |
Roth, Visage: A User Interface Environment for Exploring Information, Oct. 28-29, 2006, 9 pgs. |
Screen Dumps for Microsoft Office Excel 2003 SP2, figures 1-36, 2003, pp. 1-19. |
Setlur, Preinterview First Office Action, U.S. Appl. No. 16/234,470, dated Sep. 24, 2020, 6 pgs. |
Setlur, First Action Interview Office Action, U.S. Appl. No. 16/234,470, dated Oct. 28, 2020, 4 pgs. |
Setlur, Final Office Action, U.S. Appl. No. 16/234,470, dated Jun. 2, 2021, 49 pgs. |
Setlur, Notice of Allowance, U.S. Appl. No. 16/234,470, dated Nov. 10, 2021, 14 pgs. |
Spenke, Focus: The Interactive Table for Product Comparison and Selection, Nov. 1996, 10 pgs. |
Stevens, On the Theory of Scales of Measurement, Jun. 7, 1946, 4 pgs. |
Stolte, Multiscale Visualization Using Data Cubes, 2002, 8 pgs. |
Stolte, Notice of Allowance, U.S. Appl. No. 10/453,834, dated Mar. 27, 2006, 9 pgs. |
Stolte, Notice of Allowance, U.S. Appl. No. 11/488,407, dated Dec. 29, 1999, 8 pgs. |
Stolte, Notice of Allowance, U.S. Appl. No. 13/019,227, dated Nov. 10, 2011, 8 pgs. |
Stolte, Notice of Allowance, U.S. Appl. No. 13/425,300, dated Dec. 10, 2013, 10 pgs. |
Stolte, Office Action, U.S. Appl. No. 10/667,194, dated Jan. 7, 2008, 10 pgs. |
Stolte, Office Action, U.S. Appl. No. 10/667,194, dated Feb. 9, 2009, 11 pgs. |
Stolte, Office Action, U.S. Appl. No. 10/667,194, dated Aug. 14, 2007, 16 pgs. |
Stolte, Office Action, U.S. Appl. No. 10/667,194, dated Aug. 14, 2008, 10 pgs. |
Stolte, Office Action, U.S. Appl. No. 10/667,194, dated Jan. 18, 2007, 15 pgs. |
Stolte, Office Action, U.S. Appl. No. 10/667,194, dated Jun. 26, 2006, 13 pgs. |
Stolte, Office Action, U.S. Appl. No. 11/488,407, dated Apr. 3, 2009, 6 pgs. |
Stolte, Office Action, U.S. Appl. No. 13/019,227, dated Apr. 18, 2011, 9 pgs. |
Stolte, Office Action, U.S. Appl. No. 13/425,300, dated Mar. 15, 2013, 7 pgs. |
Stolte, Office Action, U.S. Appl. No. 14/937,836, dated Oct. 7, 2016, 10 pgs. |
Stolte, Notice of Allowance, U.S. Appl. No. 14/937,836, dated Mar. 1, 2017, 8 pgs. |
Stolte, Office Action, U.S. Appl. No. 15/449,844, dated Jun. 29, 2017, 16 pgs. |
Stolte, Final Office Action, U.S. Appl. No. 15/449,844, dated Feb. 6, 2018, 8 pgs. |
Stolte, Notice of Allowance, U.S. Appl. No. 15/449,844, dated May 18, 2018, 9 pgs. |
Stolte, Office Action, U.S. Appl. No. 15/582,478, dated Jul. 11, 2017, 16 pgs. |
Stolte, Final Office Action, U.S. Appl. No. 15/582,478, dated Mar. 8, 2018, 10 pgs. |
Stolte, Notice of Allowance, U.S. Appl. No. 15/582,478, dated Jun. 26, 2017, 10 pgs. |
Stolte, Notice of Allowance, U.S. Appl. No. 16/056,396, dated Apr. 16, 2019, 10 pgs. |
Stolte, Office Action, U.S. Appl. No. 16/056,819, dated Aug. 7, 2019, 12 pgs. |
Stolte, Notice of Allowance, U.S. Appl. No. 16/056,819, dated Sep. 11, 2019, 8 pgs. |
Stolte, Office Action, U.S. Appl. No. 16/220,240, dated Aug. 7, 2019, 11 pgs. |
Stolte, Notice of Allowance, U.S. Appl. No. 16/220,240, dated Sep. 11, 2019, 8 pgs. |
Stolte, Notice of Allowance, U.S. Appl. No. 16/137,457, dated Sep. 6, 2019, 10 pgs. |
Stolte, Notice of Allowance, U.S. Appl. No. 16/137,071, dated Sep. 11, 2019, 10 pgs. |
Stolte, Polaris: A System for Query, Analysis, and Visualization of Multidimensional Relational Databases, Jan. 2002, 14 pgs. |
Stolte, Query Analysis, and Visualization of Hierarchically Structured Data Using Polaris, Jul. 2002, 11 pgs. |
Stolte, Specification, U.S. Appl. No. 10/453,834, filed Jun. 2, 2003, 114 pgs. |
Stolte, Visualizing Application Behavior on Superscaler Processors, 1999, 9 pgs. |
Tableau Software, IPRP, PCT/US2005/043937, Jun. 5, 2007, 9 pgs. |
Tableau Software, IPRP, PCT/US2007/009810, Oct. 22, 2008, 7 pgs. |
Tableau Software, ISR/WO, PCT/US2005/043937, Apr. 18, 2007, 9 pgs. |
Tableau Software, ISR/WO, PCT/US2006/35300, Jul. 7, 2008, 6 pgs. |
Tableau Software, ISR/WO, PCT/US2007/009810, Jul. 7, 2008, 8 pgs. |
Tableau Software, Inc., International Search Report and Written Opinion, PCT/US2019/055169, dated Dec. 16, 2019, 12 pgs. |
The Board of Trustees..Stanford, IPRP, PCT/US04/18217, Oct. 19, 2006, 4 pgs. |
The Board of Trustees..Stanford, IPRP, PCT/US2004/30396, Jan. 30, 2007, 3 pgs. |
The Board of Trustees..Stanford, ISR/WO, PCT/US04/18217, Feb. 7, 2006, 6 pgs. |
The Board of Trustees..Stanford, ISR/WO, PCT/US2004/30396, Aug. 24, 2006, 5 pgs. |
The Board of Trustees..Stanford, Supplementary ESR, EP 04754739.3, Dec. 17, 2007, 4 pgs. |
Thearling, Visualizing Data Mining Models, 2001, 14 pgs. |
Tory, First Action Preinterview Office Action, U.S. Appl. No. 16/219,406, dated Jul. 10, 2020, 7 pgs. |
Tory, Notice of Allowance, U.S. Appl. No. 16/219,406, dated Sep. 9, 2020, 8 pgs. |
Tory, Office Action, U.S. Appl. No. 16/575,354, dated Nov. 3, 2020, 17 pgs. |
Tory, Office Action, U.S. Appl. No. 16/575,354, dated Sep. 20, 2021, 21 pgs. |
Tory, Office Action, U.S. Appl. No. 16/575,349, dated Oct. 13, 2020, 15 pgs. |
Tory, Notice of Allowance, U.S. Appl. No. 16/575,349, dated Feb. 3, 2021, 9 pgs. |
Ward, XmdvTool: Integrating Multiple Methods for Visualizing Multi-Variate Data, 9 pgs. |
Welling, Visualization of Large Multi-Dimensional Datasets, Aug. 11, 2000, 6 pgs. |
Wilkinson, nViZn: An Algebra-Based Visualization System, Mar. 21-23, 2001, 7 pgs. |
Wilkinson, Statistics and Computing—The Grammar of Graphics, 1999, 417 pgs. |
Number | Date | Country | |
---|---|---|---|
20220164540 A1 | May 2022 | US |
Number | Date | Country | |
---|---|---|---|
62742857 | Oct 2018 | US |
Number | Date | Country | |
---|---|---|---|
Parent | 16234470 | Dec 2018 | US |
Child | 17667474 | US |