The disclosed implementations relate generally to data visualization and more specifically to systems and methods that facilitate visualizing object models of a data source.
Data visualization applications enable a user to understand a data set visually, including distribution, trends, outliers, and other factors that are important to making business decisions. Some data visualization applications provide a user interface that enables users to build visualizations from a data source by selecting data fields and placing them into specific user interface regions to indirectly define a data visualization. However, when there are complex data sources and/or multiple data sources, it may be unclear what type of data visualization to generate (if any) based on a user's selections.
In some cases, it can help to construct an object model of a data source before generating data visualizations. In some instances, one person is a particular expert on the data, and that person creates the object model. By storing the relationships in an object model, a data visualization application can leverage that information to assist all users who access the data, even if they are not experts. For example, other users can combine tables or augment an existing table or an object model.
An object is a collection of named attributes. An object often corresponds to a real-world object, event, or concept, such as a Store. The attributes are descriptions of the object that are conceptually in a 1:1 relationship with the object. Thus, a Store object may have a single [Manager Name] or [Employee Count] associated with it. At a physical level, an object is often stored as a row in a relational table, or as an object in JSON.
A class is a collection of objects that share the same attributes. It must be analytically meaningful to compare objects within a class and to aggregate over them. At a physical level, a class is often stored as a relational table, or as an array of objects in JSON.
An object model is a set of classes and a set of many-to-one relationships between them. Classes that are related by 1-to-1 relationships are conceptually treated as a single class, even if they are meaningfully distinct to a user; such classes may nevertheless be presented as distinct classes in the data visualization user interface. Many-to-many relationships are conceptually split into two many-to-one relationships by adding an associative table that captures the relationship.
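To make these definitions concrete, here is a minimal sketch, in TypeScript, of one way an object model could be represented in memory. The type names, field names, and the Line Items/Orders example are illustrative assumptions, not structures prescribed by the disclosed implementations.

```typescript
// Illustrative types only; the disclosed implementations do not
// prescribe a particular representation or language.

interface ObjectClass {
  name: string;           // e.g., "Store"
  attributes: string[];   // e.g., ["Manager Name", "Employee Count"]
}

// A many-to-one relationship: many instances of `many` relate to one
// instance of `one`, linked through the named fields.
interface ManyToOneRelationship {
  many: string;                     // class name on the "many" side
  one: string;                      // class name on the "one" side
  linkingFields: [string, string];  // [field in `many`, field in `one`]
}

interface ObjectModel {
  classes: ObjectClass[];
  relationships: ManyToOneRelationship[];
}

// Example: many line items per order.
const model: ObjectModel = {
  classes: [
    { name: "Line Items", attributes: ["Order ID", "Product", "Sales"] },
    { name: "Orders", attributes: ["Order ID", "Order Date"] },
  ],
  relationships: [
    { many: "Line Items", one: "Orders", linkingFields: ["Order ID", "Order ID"] },
  ],
};
```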
Once a class model is constructed, a data visualization application can assist a user in various ways. In some implementations, based on data fields already selected and placed onto shelves in the user interface, the data visualization application can recommend additional fields or limit what actions can be taken to prevent unusable combinations. In some implementations, the data visualization application allows a user considerable freedom in selecting fields, and uses the object model to build one or more data visualizations according to what the user has selected.
In accordance with some implementations, a method facilitates visually building object models for data sources. The method is performed at a computer having one or more processors, a display, and memory. The memory stores one or more programs configured for execution by the one or more processors. The computer displays, in a connections region, a plurality of data sources. Each data source is associated with a respective one or more tables. The computer concurrently displays, in an object model visualization region, a tree having one or more data object icons. Each data object icon represents a logical combination of one or more tables. While concurrently displaying the tree of the one or more data object icons in the object model visualization region and the plurality of data sources in the connections region, the computer performs a sequence of operations. The computer detects, in the connections region, a first portion of an input on a first table associated with a first data source in the plurality of data sources. In response to detecting the first portion of the input on the first table, the computer generates a candidate data object icon corresponding to the first table. The computer also detects, in the connections region, a second portion of the input on the candidate data object icon. In response to detecting the second portion of the input on the candidate data object icon, the computer moves the candidate data object icon from the connections region to the object model visualization region. In response to moving the candidate data object icon to the object model visualization region and while still detecting the input, the computer provides a visual cue to connect the candidate data object icon to a neighboring data object icon. The computer detects, in the object model visualization region, a third portion of the input on the candidate data object icon. In response to detecting the third portion of the input on the candidate data object icon, the computer displays a connection between the candidate data object icon and the neighboring data object icon, and updates the tree of the one or more data object icons to include the candidate data object icon.
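The multi-portion input can be pictured as a small state machine. The following TypeScript sketch is one hypothetical way to track the first, second, and third portions of the input described above; the state and event names are assumptions made for illustration.

```typescript
// A hypothetical state machine tracking the three portions of the
// drag input described above; the state and event names are
// illustrative assumptions.

type DragPhase =
  | "idle"
  | "candidate-created"       // first portion: press on a table
  | "moved-to-model-region"   // second portion: dragged across regions
  | "connected";              // third portion: dropped and linked

type DragInputEvent = "press" | "cross-region" | "drop";

function nextPhase(phase: DragPhase, event: DragInputEvent): DragPhase {
  switch (phase) {
    case "idle":
      return event === "press" ? "candidate-created" : phase;
    case "candidate-created":
      return event === "cross-region" ? "moved-to-model-region" : phase;
    case "moved-to-model-region":
      return event === "drop" ? "connected" : phase;
    default:
      return phase;
  }
}
```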
In some implementations, prior to providing the visual cue, the computer performs a nearest object icon calculation that corresponds to the location of the candidate data object icon in the object model visualization region to identify the neighboring data object icon.
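One plausible form of the nearest object icon calculation is a straightforward closest-center search, sketched below in TypeScript; the types and the choice of Euclidean distance are assumptions, since the disclosed implementations do not commit to a particular metric.

```typescript
// One plausible nearest object icon calculation: find the existing
// icon whose center is closest (by Euclidean distance) to the dragged
// candidate. Types and the distance metric are assumptions.

interface IconPosition {
  id: string;
  x: number;  // icon center x, in visualization-region coordinates
  y: number;  // icon center y
}

function nearestIcon(
  candidate: IconPosition,
  existing: IconPosition[]
): IconPosition | null {
  let best: IconPosition | null = null;
  let bestDist = Infinity;
  for (const icon of existing) {
    const dist = Math.hypot(icon.x - candidate.x, icon.y - candidate.y);
    if (dist < bestDist) {
      bestDist = dist;
      best = icon;
    }
  }
  return best;
}
```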
In some implementations, the computer provides the visual cue by displaying a Bézier curve between the candidate data object icon and the neighboring data object icon.
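As a sketch of how such a cue might be produced, the following helper builds an SVG cubic Bézier path between two icon centers; the control-point placement is an illustrative choice, not the disclosed geometry.

```typescript
// Illustrative only: build an SVG cubic Bézier path string between two
// icon centers; a rendering layer could draw this path as the cue.

type Point = { x: number; y: number };

function bezierCuePath(from: Point, to: Point): string {
  const midX = (from.x + to.x) / 2;
  // Two control points at the horizontal midpoint produce a smooth
  // S-shaped connector between the icons.
  return `M ${from.x} ${from.y} C ${midX} ${from.y}, ${midX} ${to.y}, ${to.x} ${to.y}`;
}
```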
In some implementations, the computer detects, in the object model visualization region, a second input on a respective data object icon. In response to detecting the second input on the respective data object icon, the computer provides an affordance to edit the respective data object icon. In some implementations, the computer detects, in the object model visualization region, a selection of the affordance to edit the respective data object icon. In response to detecting the selection of the affordance to edit the respective data object icon, the computer displays, in the object model visualization region, a second set of one or more data object icons corresponding to the respective data object icon. In some implementations, the computer displays an affordance to revert to displaying a state of the object model visualization region prior to detecting the second input.
In some implementations, the computer displays a respective type icon corresponding to each data object icon. In some implementations, each type icon indicates if the corresponding data object icon specifies a join, a union, or custom SQL statements. In some implementations, the computer detects an input on a first type icon. In response to detecting the input on the first type icon, the computer displays an editor for editing the corresponding data object icon.
In some implementations, in response to detecting that the candidate data object icon is moved over a first data object icon in the object model visualization region, depending on the relative position of the first data object icon to the candidate data object icon, the computer either replaces the first data object icon with the candidate data object icon or displays shortcuts to combine the first data object icon with the candidate data object icon.
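A hedged sketch of this drop logic appears below: dropping squarely on the middle of an existing icon replaces it, while dropping near its edges offers combine shortcuts. The 50% inner-region threshold is an assumed value for illustration.

```typescript
// A sketch, under assumed semantics, of choosing between replacing an
// existing icon and offering combine shortcuts: a drop on the middle
// 50% of the target replaces it; a drop nearer its edges offers
// shortcuts to combine. The threshold is an illustrative assumption.

interface Rect { x: number; y: number; width: number; height: number; }

type DropAction = "replace" | "show-combine-shortcuts";

function dropActionFor(target: Rect, dropX: number, dropY: number): DropAction {
  const insetX = target.width * 0.25;
  const insetY = target.height * 0.25;
  const inInnerRegion =
    dropX >= target.x + insetX &&
    dropX <= target.x + target.width - insetX &&
    dropY >= target.y + insetY &&
    dropY <= target.y + target.height - insetY;
  return inInnerRegion ? "replace" : "show-combine-shortcuts";
}
```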
In some implementations, in response to detecting the third portion of the input on the candidate data object icon, the computer displays one or more affordances to select linking fields that connect the candidate data object icon with the neighboring data object icon. The computer detects a selection input on a respective affordance of the one or more affordances. In response to detecting the selection input, the computer updates the tree of the one or more data object icons according to a linking field corresponding to the selection input. In some implementations, a new or modified object model corresponding to the updated tree is saved.
In some implementations, the input is a drag and drop operation.
In some implementations, the computer generates the candidate data object icon by displaying the candidate data object icon in the connections region and superimposing the candidate data object icon over the first table.
In some implementations, the computer concurrently displays, in a data grid region, data fields corresponding to one or more of the data object icons. In some implementations, in response to detecting the third portion of the input on the candidate data object icon, the computer updates the data grid region to include data fields corresponding to the candidate data object icon.
In some implementations, the computer detects, in the object model visualization region, an input to delete a first data object icon. In response to detecting the input to delete the first data object icon, the computer removes one or more connections between the first data object icon and other data object icons in the object model visualization region, and updates the tree of the one or more data object icons to omit the first data object icon.
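Conceptually, the deletion amounts to filtering the icon and its incident connections out of the tree, as in this illustrative TypeScript sketch (the types are assumptions):

```typescript
// A minimal sketch of deleting an object icon: remove every connection
// that touches it, then remove the icon from the tree. Types are
// illustrative assumptions.

interface Connection { fromId: string; toId: string; }

interface Tree {
  iconIds: string[];
  connections: Connection[];
}

function deleteIcon(tree: Tree, iconId: string): Tree {
  return {
    iconIds: tree.iconIds.filter((id) => id !== iconId),
    connections: tree.connections.filter(
      (c) => c.fromId !== iconId && c.toId !== iconId
    ),
  };
}
```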
In some implementations, the computer displays a data prep flow icon corresponding to a data object icon, and detects an input on the data prep flow icon. In response to detecting the input on the data prep flow icon, the computer displays one or more steps of the data prep flow, which define a process for calculating data for the data object icon. In some implementations, the computer detects a data prep flow edit input on a respective step of the one or more steps of the data prep flow. In response to detecting the data prep flow edit input, the computer displays one or more options to edit the respective step of the data prep flow. In some implementations, the computer displays an affordance to revert to displaying a state of the object model visualization region prior to detecting the input on the data prep flow icon.
In another aspect, in accordance with some implementations, a method facilitates visualizing object models for data sources. The method is performed at a computer having one or more processors, a display, and memory. The memory stores one or more programs configured for execution by the one or more processors. The computer displays, in an object model visualization region, a first visualization of a tree of one or more data object icons. Each data object icon represents a logical combination of one or more tables. While displaying the first visualization in the object model visualization region, the computer detects, in the object model visualization region, a first input on a first data object icon of the tree of one or more data object icons. In response to detecting the first input on the first data object icon, the computer displays a second visualization of the tree of the one or more data object icons in a first portion of the object model visualization region. The computer also displays a third visualization of information related to the first data object icon in a second portion of the object model visualization region.
In some implementations, the computer obtains the second visualization of the tree of the one or more data object icons by shrinking the first visualization.
In some implementations, the computer detects a second input on a second data object icon. In response to detecting the second input on the second data object icon, the computer ceases to display the third visualization and displays a fourth visualization of information related to the second data object icon in the second portion of the object model visualization region. In some implementations, the computer resizes the first portion and the second portion according to (i) the size of the tree of the one or more data object icons, and (ii) the size of the information related to the second data object icon. In some implementations, the computer moves the second visualization to focus on the second data object icon in the first portion of the object model visualization region.
In some implementations, the computer displays, in the object model visualization region, one or more affordances to select filters to add to the first visualization.
In some implementations, the computer displays, in the object model visualization region, recommendations of one or more data sources to add objects to the tree of one or more data object icons.
In some implementations, prior to displaying the second visualization and the third visualization, the computer segments the object model visualization region into the first portion and the second portion according to (i) the size of the tree of the one or more data object icons, and (ii) the size of the information related to the first data object icon.
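One way to perform such size-driven segmentation is to split the region's width in proportion to the two content sizes, with a floor so neither portion collapses; the sketch below, including the 20% minimum fraction, is an illustrative assumption.

```typescript
// An illustrative size-driven split: apportion the region's width to
// the tree and the detail panel in proportion to their content sizes,
// with a minimum fraction so neither portion collapses. The 20% floor
// is an assumed value.

function splitRegion(
  regionWidth: number,
  treeContentWidth: number,
  detailContentWidth: number,
  minFraction = 0.2
): { firstPortion: number; secondPortion: number } {
  const total = treeContentWidth + detailContentWidth;
  let firstFraction = total > 0 ? treeContentWidth / total : 0.5;
  firstFraction = Math.min(Math.max(firstFraction, minFraction), 1 - minFraction);
  return {
    firstPortion: regionWidth * firstFraction,
    secondPortion: regionWidth * (1 - firstFraction),
  };
}
```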
In some implementations, prior to displaying the second visualization and the third visualization, the computer generates a fourth visualization of information related to the first data object icon. The computer displays the fourth visualization by superimposing the fourth visualization over the first visualization while concurrently shrinking and moving the first visualization to the first portion in the object model visualization region.
In some implementations, the computer successively grows and/or moves the fourth visualization to form the third visualization in the second portion in the object model visualization region. In some implementations, the information related to the first data object icon includes a second tree of one or more data object icons.
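The successive grow-and-move behavior can be modeled as frame-by-frame interpolation between a starting and an ending rectangle, as in this minimal sketch; linear interpolation is an assumed choice, and an implementation could equally use an eased curve.

```typescript
// Frame-by-frame interpolation between a starting and ending rectangle;
// a UI layer would call this once per animation frame with t in [0, 1].
// Linear interpolation is an assumed choice.

interface Box { x: number; y: number; width: number; height: number; }

function interpolateBox(from: Box, to: Box, t: number): Box {
  const lerp = (a: number, b: number) => a + (b - a) * t;
  return {
    x: lerp(from.x, to.x),
    y: lerp(from.y, to.y),
    width: lerp(from.width, to.width),
    height: lerp(from.height, to.height),
  };
}
```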
In some implementations, the computer detects a third input in the second portion of the object model visualization region, away from the second visualization. In response to detecting the third input, the computer reverts to displaying the first visualization in the object model visualization region. In some implementations, reverting to displaying the first visualization in the object model visualization region includes ceasing to display the third visualization in the second portion of the object model visualization region, and successively growing and moving the second visualization to form the first visualization in the object model visualization region.
In accordance with some implementations, a system for generating data visualizations includes one or more processors, memory, and one or more programs stored in the memory. The programs are configured for execution by the one or more processors. The programs include instructions for performing any of the methods described herein.
In accordance with some implementations, a non-transitory computer readable storage medium stores one or more programs configured for execution by a computer system having one or more processors and memory. The one or more programs include instructions for performing any of the methods described herein.
Thus, methods, systems, and graphical user interfaces are provided for forming object models for data sources.
For a better understanding of the aforementioned implementations of the invention as well as additional implementations, reference should be made to the Description of Implementations below, in conjunction with the following drawings in which like reference numerals refer to corresponding parts throughout the figures.
Reference will now be made in detail to implementations, examples of which are illustrated in the accompanying drawings. In the following detailed description, numerous specific details are set forth in order to provide a thorough understanding of the present invention. However, it will be apparent to one of ordinary skill in the art that the present invention may be practiced without these specific details.
Some implementations allow a user to compose an object by combining multiple tables. Some implementations allow a user to expand an object via a join or a union with other objects. Some implementations provide drag-and-drop analytics to facilitate building an object model. Some implementations facilitate snapping and/or connecting objects or tables to an object model. These techniques and other related details are explained below in reference to
Some implementations of an interactive data visualization application use a data visualization user interface 108 to build a visual specification 110, as shown in
In most instances, not all of the visual variables are used. In some instances, some of the visual variables have two or more assigned data fields. In this scenario, the order of the assigned data fields for the visual variable (e.g., the order in which the data fields were assigned to the visual variable by the user) typically affects how the data visualization is generated and displayed.
As a user adds data fields to the visual specification (e.g., indirectly by using the graphical user interface to place data fields onto shelves), the data visualization application 234 groups (112) together the user-selected data fields according to the object model 106. Such groups are called data field sets. In many cases, all of the user-selected data fields are in a single data field set. In some instances, there are two or more data field sets. Each measure m is in exactly one data field set, but each dimension d may be in more than one data field set.
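The grouping rule can be illustrated with a simplified sketch: each measure anchors a data field set at its class, and a dimension joins every set whose class can reach the dimension's class by following many-to-one relationships. This is a plausible reading of the rule stated above, not the disclosed algorithm; all names are assumptions.

```typescript
// A simplified, illustrative grouping of selected fields into data
// field sets. Each measure is placed in exactly one set (keyed by its
// class); each dimension may join more than one set.

interface Field {
  name: string;
  className: string;            // class (object) the field belongs to
  kind: "measure" | "dimension";
}

function groupFields(
  fields: Field[],
  // manyToOne.get(A) = classes on the "one" side of A's relationships
  manyToOne: Map<string, Set<string>>
): Map<string, Field[]> {
  // True if `to` is reachable from `from` via many-to-one links.
  const reachable = (from: string, to: string): boolean => {
    const seen = new Set<string>([from]);
    const stack = [from];
    while (stack.length > 0) {
      const cls = stack.pop()!;
      if (cls === to) return true;
      for (const next of manyToOne.get(cls) ?? []) {
        if (!seen.has(next)) {
          seen.add(next);
          stack.push(next);
        }
      }
    }
    return false;
  };

  const sets = new Map<string, Field[]>();
  // Each measure lands in exactly one set, keyed by its class.
  for (const f of fields) {
    if (f.kind === "measure") {
      if (!sets.has(f.className)) sets.set(f.className, []);
      sets.get(f.className)!.push(f);
    }
  }
  // A dimension may join more than one set.
  for (const [cls, set] of sets) {
    for (const f of fields) {
      if (f.kind === "dimension" && reachable(cls, f.className)) {
        set.push(f);
      }
    }
  }
  return sets;
}
```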
The data visualization application 234 queries (114) the data sources 102 for the first data field set, and then generates a first data visualization 118 corresponding to the retrieved data. The first data visualization 118 is constructed according to the visual variables in the visual specification 110 that have assigned data fields from the first data field set. When there is only one data field set, all of the information in the visual specification 110 is used to build the first data visualization 118. When there are two or more data field sets, the first data visualization 118 is based on a first visual sub-specification consisting of all information relevant to the first data field set. For example, suppose the original visual specification 110 includes a filter that uses a data field f. If the field f is included in the first data field set, the filter is part of the first visual sub-specification, and thus used to generate the first data visualization 118.
When there is a second (or subsequent) data field set, the data visualization application 234 queries (116) the data sources 102 for the second (or subsequent) data field set, and then generates the second (or subsequent) data visualization 120 corresponding to the retrieved data. This data visualization 120 is constructed according to the visual variables in the visual specification 110 that have assigned data fields from the second (or subsequent) data field set.
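Taken together, the per-field-set pipeline is one query and one data visualization per data field set. The sketch below assumes placeholder `queryDataSource` and `buildVisualization` functions standing in for an implementation's query and rendering layers; neither is a disclosed API.

```typescript
// A hedged sketch of the per-field-set pipeline: one query and one
// data visualization per data field set. `queryDataSource` and
// `buildVisualization` are hypothetical placeholders.

type SelectedField = { name: string };

async function buildVisualizations(
  fieldSets: SelectedField[][],
  queryDataSource: (fields: SelectedField[]) => Promise<unknown[]>,
  buildVisualization: (rows: unknown[], fields: SelectedField[]) => void
): Promise<void> {
  for (const fields of fieldSets) {
    const rows = await queryDataSource(fields);  // query for this set
    buildVisualization(rows, fields);            // one viz per set
  }
}
```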
In some implementations, the memory 206 includes high-speed random-access memory, such as DRAM, SRAM, DDR RAM or other random-access solid-state memory devices. In some implementations, the memory 206 includes non-volatile memory, such as one or more magnetic disk storage devices, optical disk storage devices, flash memory devices, or other non-volatile solid-state storage devices. In some implementations, the memory 206 includes one or more storage devices remotely located from the CPUs 202. The memory 206, or alternatively the non-volatile memory devices within the memory 206, comprises a non-transitory computer-readable storage medium. In some implementations, the memory 206, or the computer-readable storage medium of the memory 206, stores the following programs, modules, and data structures, or a subset thereof:
Each of the above identified executable modules, applications, or sets of procedures may be stored in one or more of the previously mentioned memory devices, and corresponds to a set of instructions for performing a function described above. The above identified modules or programs (i.e., sets of instructions) need not be implemented as separate software programs, procedures, or modules, and thus various subsets of these modules may be combined or otherwise rearranged in various implementations. In some implementations, the memory 206 stores a subset of the modules and data structures identified above. In some implementations, the memory 206 stores additional modules or data structures not described above.
Although
Continuing with the example, referring next to
As shown in
As shown in
In some implementations, as shown in
Continuing with the example,
Referring next to the screen shot in
Suppose, as shown in
Continuing the example, in
Reverting to the parent object model (consisting of the Line Items table 502 and the Orders object 506), as shown in
In
Some implementations determine and/or indicate valid, invalid, and/or probable object icons to associate the candidate object icon with. For example, some implementations determine probable neighbors based on known or predetermined relationships between the objects. As illustrated in
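A simple way to realize such an indication is to classify each existing icon against the candidate, as in this illustrative sketch: a known or predetermined relationship marks a probable neighbor, a shared field name marks a valid one, and anything else is invalid. The classification criteria shown are assumptions.

```typescript
// An illustrative classification of candidate neighbors. The criteria
// shown (known relationship, shared field name) are assumptions.

type NeighborStatus = "probable" | "valid" | "invalid";

function classifyNeighbor(
  candidateFields: Set<string>,
  neighborFields: Set<string>,
  hasKnownRelationship: boolean
): NeighborStatus {
  if (hasKnownRelationship) return "probable";
  for (const field of candidateFields) {
    if (neighborFields.has(field)) return "valid"; // plausible linking field
  }
  return "invalid";
}
```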
In contrast to the other objects in the object model, as shown in
As illustrated in
Referring next to
When the user selects the edit option 1004 for the object, as illustrated in the screen shot in
As shown in the screen shot shown in
The examples use a union drop target for illustration, but similar techniques can be applied to other types of objects or icons to provide visual cues. In some implementations, an invisible revealer area is dedicated to showing a union drop target, as illustrated in
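The revealer idea can be sketched as a hit test against a hidden strip adjacent to an existing icon: the union drop target is shown only while the dragged icon is inside the strip. The strip's placement below the icon and its 24-pixel height are assumed values for illustration.

```typescript
// Illustrative hit test for an invisible revealer area: the union drop
// target is shown only while the drag position is inside a hidden
// strip directly beneath an existing icon. The strip geometry (same
// width as the icon, assumed 24 px tall) is an assumption.

interface Rect { x: number; y: number; width: number; height: number; }

function unionTargetVisible(icon: Rect, dragX: number, dragY: number): boolean {
  const revealer: Rect = {
    x: icon.x,
    y: icon.y + icon.height,  // strip sits just below the icon
    width: icon.width,
    height: 24,
  };
  return (
    dragX >= revealer.x && dragX <= revealer.x + revealer.width &&
    dragY >= revealer.y && dragY <= revealer.y + revealer.height
  );
}
```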
Referring next to
The computer displays (1408), in a connections region (e.g., the region 318), a plurality of data sources. Each data source is associated with a respective one or more tables. The computer concurrently displays (1410), in an object model visualization region (e.g., the region 304), a tree of one or more data object icons (e.g., the object icons 320-2, . . . , 320-12 in
Referring next to
The computer also detects (1420), in the connections region, a second portion of the input on the candidate data object icon. In response to detecting the second portion of the input on the candidate data object icon, the computer moves (1422) the candidate data object icon from the connections region to the object model visualization region.
Referring next to
The computer detects (1430), in the object model visualization region, a third portion of the input on the candidate data object icon. In response (1432) to detecting the third portion of the input on the candidate data object icon, the computer displays (1434) a connection between the candidate data object icon and the neighboring data object icon, and updates (1436) the tree of the one or more data object icons to include the candidate data object icon.
Referring next to
Referring next to
Referring next to
Referring next to
Referring next to
Referring next to
Referring next to
Suppose the user selects an object (e.g., by clicking while positioning the cursor on the object icon), as illustrated in the screen shot in
Referring next to
Referring next to
Referring next to
The computer displays (1608), in an object model visualization region (e.g., the region 304), a first visualization of a tree of one or more data object icons (e.g., as described above in reference to
The computer detects (1612), in the object model visualization region, a first input on a first data object icon of the tree of one or more data object icons. In response to detecting the first input on the first data object icon, the computer displays (1614) a second visualization of the tree of the one or more data object icons in a first portion of the object model visualization region. The computer also displays (1614) a third visualization of information related to the first data object icon in a second portion of the object model visualization region. Examples of these operations are described above in reference to
In some implementations, the computer obtains the second visualization of the tree of the one or more data object icons by shrinking the first visualization. For example, the visualization shown in the first portion 1508 in
In some implementations, the computer detects a second input on a second data object icon. In response to detecting the second input on the second data object icon, the computer ceases to display the third visualization and displays a fourth visualization of information related to the second data object icon in the second portion of the object model visualization region. For example, when the user selects the Products object 320-6 in
In some implementations, the computer displays, in the object model visualization region, one or more affordances to select filters (e.g., options 1502) to add to the first visualization.
In some implementations, the computer displays, in the object model visualization region, recommendations of one or more data sources (e.g., options 1504) to add objects to the tree of one or more data object icons.
In some implementations, prior to displaying the second visualization and the third visualization, the computer segments the object model visualization region into the first portion and the second portion according to (i) the size of the tree of the one or more data object icons, and (ii) the size of the information related to the first data object icon. For example, when transitioning from the display in
In some implementations, prior to displaying the second visualization and the third visualization, the computer generates a fourth visualization of information related to the first data object icon. The computer displays the fourth visualization by superimposing the fourth visualization over the first visualization while concurrently shrinking and moving the first visualization to the first portion in the object model visualization region.
In some implementations, the computer successively grows and/or moves the fourth visualization to form the third visualization in the second portion in the object model visualization region. In some implementations, the information related to the first data object icon includes a second tree of one or more data object icons (for the object corresponding to the first data object icon).
In some implementations, the computer detects a third input in the second portion of the object model visualization region, away from the second visualization. In response to detecting the third input, the computer reverts to displaying the first visualization in the object model visualization region. In some implementations, reverting to displaying the first visualization in the object model visualization region includes ceasing to display the third visualization in the second portion of the object model visualization region, and successively growing and moving the second visualization to form the first visualization in the object model visualization region. Examples of these operations and user interfaces are described above in reference to
The terminology used in the description of the invention herein is for the purpose of describing particular implementations only and is not intended to be limiting of the invention. As used in the description of the invention and the appended claims, the singular forms “a,” “an,” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will also be understood that the term “and/or” as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items. It will be further understood that the terms “comprises” and/or “comprising,” when used in this specification, specify the presence of stated features, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, steps, operations, elements, components, and/or groups thereof.
The foregoing description, for purpose of explanation, has been described with reference to specific implementations. However, the illustrative discussions above are not intended to be exhaustive or to limit the invention to the precise forms disclosed. Many modifications and variations are possible in view of the above teachings. The implementations were chosen and described in order to best explain the principles of the invention and its practical applications, to thereby enable others skilled in the art to best utilize the invention and various implementations with various modifications as are suited to the particular use contemplated.
This application is a continuation of U.S. patent application Ser. No. 16/679,233, filed Nov. 10, 2019, entitled "Systems and Methods for Visualizing Object Models of Database Tables," which is incorporated by reference herein in its entirety. This application is related to U.S. patent application Ser. No. 16/572,506, filed Sep. 16, 2019, entitled "Systems and Methods for Visually Building an Object Model of Database Tables," which is incorporated by reference herein in its entirety. This application is related to U.S. patent application Ser. No. 16/236,611, filed Dec. 30, 2018, entitled "Generating Data Visualizations According to an Object Model of Selected Data Sources," which claims priority to U.S. Provisional Patent Application No. 62/748,968, filed Oct. 22, 2018, entitled "Using an Object Model of Heterogeneous Data to Facilitate Building Data Visualizations," each of which is incorporated by reference herein in its entirety. This application is related to U.S. patent application Ser. No. 16/236,612, filed Dec. 30, 2018, entitled "Generating Data Visualizations According to an Object Model of Selected Data Sources," which is incorporated by reference herein in its entirety. This application is related to U.S. patent application Ser. No. 16/570,969, filed Sep. 13, 2019, entitled "Utilizing Appropriate Measure Aggregation for Generating Data Visualizations of Multi-fact Datasets," which is incorporated by reference herein in its entirety. This application is related to U.S. patent application Ser. No. 15/911,026, filed Mar. 2, 2018, entitled "Using an Object Model of Heterogeneous Data to Facilitate Building Data Visualizations," which claims priority to U.S. Provisional Patent Application No. 62/569,976, filed Oct. 9, 2017, entitled "Using an Object Model of Heterogeneous Data to Facilitate Building Data Visualizations," each of which is incorporated by reference herein in its entirety. This application is also related to U.S. patent application Ser. No. 14/801,750, filed Jul. 16, 2015, entitled "Systems and Methods for using Multiple Aggregation Levels in a Single Data Visualization," and U.S. patent application Ser. No. 15/497,130, filed Apr. 25, 2017, entitled "Blending and Visualizing Data from Multiple Data Sources," which is a continuation of U.S. patent application Ser. No. 14/054,803, filed Oct. 15, 2013, entitled "Blending and Visualizing Data from Multiple Data Sources," now U.S. Pat. No. 9,633,076, which claims priority to U.S. Provisional Patent Application No. 61/714,181, filed Oct. 15, 2012, entitled "Blending and Visualizing Data from Multiple Data Sources," each of which is incorporated by reference herein in its entirety. This application is related to U.S. patent application Ser. No. 16/679,111, filed Nov. 8, 2019, entitled "Using Visual Cues to Validate Object Models of Database Tables," which is incorporated by reference herein in its entirety.