Embodiments of the present invention are directed to methods, systems and articles of manufacture for efficiently calculating a tax return in a tax return preparation application operable to prepare an electronic tax return.
The embodiments of the present invention may be implemented on and/or within a tax return preparation system comprising a tax preparation software application executing on a computing device. The tax return preparation system may operate on a new construct in which tax rules and the calculations based thereon are established in declarative data-structures, namely, completeness graph(s) and tax calculation graph(s). Use of these data-structures permits the user interface to be loosely connected or even divorced from the tax calculation engine and the data used in the tax calculations. Tax calculations are performed dynamically based on tax-related data that is input from a user, derived from sourced data, or estimated. A smart tax logic agent running on a set of rules can review current run time data and evaluate missing tax data necessary to prepare and complete a tax return. The tax logic agent proposes questions to be asked of a user to fill in missing blanks. This process can be continued until all tax topics are complete. A completed tax return (e.g., a printed tax return or an electronic tax return) can then be prepared and filed with respect to the relevant taxing jurisdictions.
In another aspect of the tax return preparation system, a computer-implemented method of calculating tax liability includes the operations of a computing device establishing a connection to a shared data store configured to store user-specific tax data therein. The computing device executes a tax calculation engine configured to read and write tax calculation data to and from the shared data store, the tax calculation engine using one or more of the calculation graphs specific to particular tax topics. The computing device executes a tax logic agent, the tax logic agent reading from the shared data store and a plurality of decision tables collectively representing a completion graph for computing tax liability or a portion thereof, the tax logic agent outputting one or more suggestions for missing tax data based on an entry in one of the plurality of decision tables. The computing device executes a user interface manager configured to receive the one or more suggestions and present to a user one or more questions based on the one or more suggestions via a user interface, wherein a user response to the one or more questions is input to the shared data store. The user interface manager which receives the suggestion(s) selects one or more suggested questions to be presented to a user. Alternatively, the user interface manager may ignore the suggestion(s) and present a different question or prompt to the user.
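The tax logic agent's review of the decision tables for missing tax data can be sketched as follows. This is a minimal illustration only; the function name, the table layout, and the field names are hypothetical, not part of the described system:

```python
# Hypothetical sketch: a decision table is a list of rows, each row a
# mapping of {field: required_value}. Any field a row needs that is not
# yet in the shared data store becomes a suggestion for the UI manager.

def missing_fields(decision_table, shared_data):
    """Return fields the decision-table rows still need, given known data."""
    suggestions = []
    for row in decision_table:
        for field in row:
            if field not in shared_data and field not in suggestions:
                suggestions.append(field)
    return suggestions

shared_data = {"filing_status": "single"}
table = [
    {"filing_status": "single", "num_dependents": 0},
    {"filing_status": "married", "num_dependents": 2, "spouse_income": 0},
]
print(missing_fields(table, shared_data))
```

The user interface manager would then turn each suggested field into a question, or ignore the suggestion, as described above.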
In the event that all tax topics are covered, the tax logic agent, instead of outputting one or more suggestions for missing tax data may output a “done” instruction to the user interface manager. The computing device may then prepare a tax return based on the data in the shared data store. The tax return may be a conventional paper-based return or, alternatively, the tax return may be an electronic tax return which can then be e-filed.
The one or more suggestions may be questions, declarative statements, or confirmations that are output by the tax logic agent. The one or more suggestions may include a ranked listing of suggestions. The ranking may be weighted in order of importance, relevancy, confidence level, or the like. Statistical data may be incorporated by the tax logic agent to be used as part of the ranking.
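A weighted ranking of the suggestions can be sketched as follows; the weighting formula, the weights, and the statistical "confidence" field are illustrative assumptions, not the actual ranking used by the tax logic agent:

```python
# Hypothetical ranking: combine an importance weight with a statistical
# confidence level into a single score and sort descending.

def rank_suggestions(suggestions, w_importance=0.6, w_confidence=0.4):
    return sorted(
        suggestions,
        key=lambda s: w_importance * s["importance"] + w_confidence * s["confidence"],
        reverse=True,
    )

suggestions = [
    {"question": "Any 1099-INT income?", "importance": 0.9, "confidence": 0.5},
    {"question": "Did you change jobs?",  "importance": 0.4, "confidence": 0.9},
]
ranked = rank_suggestions(suggestions)
print(ranked[0]["question"])
```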
In another aspect of the tax return preparation system, the tax return preparation software running on the computing device imports tax data into the shared data store. The importation of tax data may come from one or more third party data sources. The imported tax data may also come from one or more prior year tax returns. In another aspect of the invention, the shared data store may be input with one or more estimates.
In still another feature, the tax return preparation system comprises a computer-implemented system for calculating tax liability. The system includes a computing device operably coupled to the shared data store which is configured to store user-specific tax data therein. The computing device executes a tax calculation engine. The tax calculation engine accesses taxpayer-specific tax data from the shared data store, and is configured to read and write data to and from the shared data store. The tax calculation engine performs the tax calculations based on one or more tax calculation graphs. The tax calculation graph may be a single overall calculation graph for all of the tax calculations required to calculate a tax return, or it may comprise a plurality of tax topic calculation graphs specific to particular tax topics which may be compiled to form the overall tax calculation graph. The computing device executes a tax logic agent, the tax logic agent reading from the shared data store and a plurality of decision tables collectively representing a completion graph for computing tax liability or a portion thereof, the tax logic agent outputting one or more suggestions for missing tax data based on one of the plurality of decision tables. The computing device executes a user interface manager configured to receive the one or more suggestions and present to a user one or more questions based on the one or more suggestions via a user interface, wherein a user response to the one or more questions is input to the shared data store.
One embodiment of the present invention is directed to methods for efficiently calculating a tax return while preparing an electronic tax return with the tax return preparation system. The tax return preparation system accesses the taxpayer-specific tax data from the shared data store, as described above. Upon receiving a certain amount of user-specific tax data, the tax return preparation system will typically perform a preliminary (i.e. not final) tax calculation. The system may be configured to perform preliminary tax calculations at various stages of the tax return preparation process, for example, in order to give the user an indication of their final tax liability, tax refund and/or taxes owed. Accordingly, the tax return preparation system executes the tax calculation engine to perform a first tax calculation based on the taxpayer-specific tax data read from the shared data store, which results in a first calculated tax data. As explained above, the tax calculation engine performs a plurality of tax calculations based on the tax calculation graph. The first calculated tax data may include the calculated values for the complete tax calculation graph for which data is available (including estimated tax data as described herein), or any subset of the tax calculation graph. Thus, the first calculated tax data may include the calculated values for an overall tax liability, a tax refund (if any), a tax owed (if any), and/or any intermediate calculations used to determine any of the foregoing.
As described above, the system then receives additional taxpayer-specific tax data (referred to as “new taxpayer-specific tax data”) which was not utilized in the first tax calculation, such as responses received in response to the questions presented to the user by the user interface manager which questions are based on the suggestions output by the tax logic agent. After receiving some amount of new taxpayer-specific tax data, the tax return preparation system executes the tax calculation engine to perform a second tax calculation. For instance, the system can perform the second tax calculation after receiving each response or additional tax-payer specific data, or after receiving complete information for a tax topic, or after a certain elapsed time period during preparation of the tax return, or other suitable point in the process of preparing a tax return.
In the second tax calculation, the tax calculation engine only performs those calculations in the tax calculation graph which are changed by the new taxpayer-specific tax data. In other words, if a calculation in the tax calculation graph is not changed by the new taxpayer-specific tax data, the calculation is not performed, because it does not need to be performed, since it will have no effect on the tax calculations. For instance, if the new taxpayer-specific tax data is related to a particular tax topic, then only the calculations in that tax topic part of the tax calculation graph may need to be calculated, as well as any other parts of the calculation graph changed by a change in that tax topic part of the tax calculation graph. And if the second calculation of the tax topic part of the tax calculation graph does not result in a change in any value(s) utilized by other parts of the calculation graph, then no other part of the tax calculation graph needs to be calculated during the second tax calculation. The second calculation results in a second calculated tax data, which may include the same types of calculated data as described above for the first calculated tax data.
In this way, the system very efficiently performs the tax calculations because it avoids doing calculations which are not affected by the new taxpayer-specific tax data.
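The incremental recalculation described above can be sketched as follows, assuming the calculation graph is held as a topologically ordered mapping of nodes to functions and dependencies. All node names, the graph contents, and the flat 20% rate are invented for illustration:

```python
# Hypothetical sketch of "recompute only what changed": a node is
# recomputed only if one of its dependencies is dirty, and it marks
# itself dirty only if its recomputed value actually differs.

def recalculate(graph, values, changed):
    """graph: {node: (func, [deps])}, listed in topological order.
    values: cached node values from the previous calculation.
    changed: names of input nodes whose data is new."""
    dirty = set(changed)
    recomputed = []
    for node, (func, deps) in graph.items():
        if any(d in dirty for d in deps):
            new_value = func(*[values[d] for d in deps])
            recomputed.append(node)
            if new_value != values[node]:
                dirty.add(node)  # value changed: downstream nodes must recompute
            values[node] = new_value
    return recomputed

# The first calculation cached these values; then a 1099-INT arrives.
graph = {
    "agi": (lambda wages, interest: wages + interest, ["wages", "interest"]),
    "tax": (lambda agi: round(agi * 0.2, 2), ["agi"]),  # flat 20% for illustration
}
values = {"wages": 50_000, "interest": 0, "agi": 50_000, "tax": 10_000.0}
values["interest"] = 100
print(recalculate(graph, values, {"interest"}))
```

Note that if a recomputed node's value is unchanged, its dependents are skipped, which is the pruning behavior the paragraphs above describe.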
In another aspect of the present invention, the tax return preparation system may determine whether there is a difference between the first tax calculation data and the second tax calculation data, and also identify a reason why there is, or is not, a difference between the first and second tax calculation data. The system may also present the reason why there is, or is not, a difference between the first and second tax calculation data to the user. For example, if there is no difference in the tax refund between the first and second tax calculation data, a user may be interested to know the reason why the new taxpayer-specific tax data did not affect the taxpayer's tax refund. Conversely, if there is a difference in the tax refund between first and second tax calculation data, the user may also be interested to know the reason for the difference. As an example, the new taxpayer-specific tax data may have qualified the taxpayer for an additional tax deduction, which the user may find of interest.
In still another aspect, the tax calculation graph may comprise a plurality of calculation paths connecting a plurality of nodes. Each of the nodes represents at least one of a tax data input, a tax concept, a function or a tax calculation. For instance, a node may be the number of dependents of the taxpayer, the qualifying data for a dependent, a decision function for determining eligibility as a dependent, etc. The calculation paths define data dependencies between the nodes, in which a node has a data dependency on another node if a determination at such node depends on a value of the other node. A node has a direct dependency on another node if it is directly dependent on the other node without any intervening nodes. A node has an indirect dependency on another node if it is dependent on a node which is directly dependent on the other node or an intervening node. Then, to perform the second tax calculation, the tax return preparation system executes the tax calculation engine by only performing those calculations at nodes in the tax calculation graph having a direct or indirect data dependency on a node utilizing the new taxpayer-specific tax data, and for which the value of the node from which it depends has changed from its value obtained by the first tax calculation. Thus, the second tax calculation only performs those calculations which are changed by the new taxpayer-specific tax data.
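Under the stated assumptions about node structure, the set of nodes with a direct or indirect data dependency on the changed inputs can be found with a simple fixed-point sweep. The edge structure below is invented for illustration only:

```python
# Hypothetical sketch: edges map each node to the nodes it directly
# depends on. A fixed-point loop collects every node that directly or
# indirectly depends on the changed inputs.

def affected_nodes(edges, changed):
    """Return all nodes with a direct or indirect dependency on `changed`."""
    affected = set(changed)
    grew = True
    while grew:  # keep sweeping until no new dependents are found
        grew = False
        for node, deps in edges.items():
            if node not in affected and any(d in affected for d in deps):
                affected.add(node)
                grew = True
    return affected - set(changed)

edges = {
    "agi": ["wages", "interest"],
    "deduction": ["filing_status"],
    "taxable_income": ["agi", "deduction"],
    "tax": ["taxable_income"],
}
print(sorted(affected_nodes(edges, {"interest"})))
```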
In yet another aspect, at least one of the nodes is interconnected to another node by, or represents, a “gist” function, and the tax return preparation system uses the gist function to determine the reason why there is, or is not, a difference between the first and second tax calculation data. A “gist” function is a well-defined function to capture domain-specific patterns and semantic abstractions used in tax calculations. Gists can be de-coupled from a specific narrow definition and instead be associated with one or more explanations. Examples of common “gists” found in tax legislation/rules include the concepts of “caps” or “exceptions” that are found in various portions of the tax code.
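A hypothetical “cap” gist might look like the following; the function returns both the capped value and a human-readable explanation that can later justify why a result did or did not change. The amounts and wording are invented:

```python
# Hypothetical "cap" gist: compute the capped value and record why the
# result came out as it did, so the explanation can be surfaced later.

def cap_gist(value, cap, label):
    if value > cap:
        return cap, f"{label} was capped at {cap} (entered amount {value} exceeds the cap)"
    return value, f"{label} of {value} is under the {cap} cap, so it applies in full"

amount, reason = cap_gist(3500, 3000, "Deduction")
print(amount, "-", reason)
```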
Another embodiment of the present invention is directed to a system for efficiently calculating a tax return using one or more of the described methods. The system comprises a tax return preparation system comprising a computing device (i.e. a computer) having a computer processor, and a tax return preparation software application executable on the computing device. The computing device is operably coupled to a shared data store configured to store user-specific tax data for one or more taxpayers. The tax return preparation software application includes a tax calculation engine configured to read user-specific tax data from a shared data store and write calculated tax data to the shared data store. The tax calculation engine is configured to perform a plurality of tax calculations based on a tax calculation graph. The system may also include servers, data storage devices, and one or more displays. The tax return preparation system is configured and programmed to perform a process according to any of the method embodiments of the present invention. For instance, the system may be configured for: accessing taxpayer-specific tax data from the shared data store; executing the tax calculation engine to perform a first tax calculation based on taxpayer-specific tax data read from the shared data store and determining a first calculated tax data; receiving new taxpayer-specific tax data, the new taxpayer-specific tax data not utilized in the first tax calculation; and executing the tax calculation engine to perform a second tax calculation, the tax calculation engine only performing those calculations in the tax calculation graph affected by the new taxpayer-specific tax data and determining a second calculated tax data.
In addition, the tax return preparation system may be implemented on a computing system operated by the user, or as an online application operating on a web server and accessible using a computing device via a communications network such as the internet.
In additional aspects, the tax return preparation system may be further configured according to the additional aspects described above for the method of calculating an electronic tax return.
Another embodiment of the present invention is directed to an article of manufacture comprising a non-transitory computer readable medium embodying instructions executable by a computer to execute a process according to any of the method embodiments of the present invention for calculating an electronic tax return. For instance, the non-transitory computer readable medium embodying instructions executable by a computer may be configured to execute a process comprising: accessing taxpayer-specific tax data from the shared data store; executing the tax calculation engine to perform a first tax calculation based on taxpayer-specific tax data read from the shared data store and determining a first calculated tax data; receiving new taxpayer-specific tax data, the new taxpayer-specific tax data not utilized in the first tax calculation; and executing the tax calculation engine to perform a second tax calculation, the tax calculation engine only performing those calculations in the tax calculation graph affected by the new taxpayer-specific tax data and determining a second calculated tax data.
It is understood that the steps of the methods and processes of the present invention are not required to be performed in the order as shown in the figures or as described, but can be performed in any order that accomplishes the intended purpose of the methods and processes.
Embodiments of the present invention are directed to methods, systems and articles of manufacture for efficiently calculating an electronic tax return, such as within a tax return preparation system. In general, a computerized tax return preparation system accesses taxpayer-specific tax data from a shared data store configured to store therein taxpayer-specific tax data for a taxpayer. The system then executes a tax calculation engine configured to read the taxpayer-specific tax data and write calculated tax data to the shared data store, and also configured to perform a plurality of tax calculations based on a tax calculation graph. As explained below in more detail, the tax calculation graph semantically represents the tax legislation/tax rules for the tax return and the data structures that capture the conditions necessary to complete the computations that are required to calculate an electronic tax return. The complete tax calculation graph may have hundreds or even thousands of calculations depending on the complexity of the tax code for the tax return and the tax situation of the taxpayer. In addition, the tax preparation applications typically perform the tax calculations periodically, such as when new taxpayer-specific tax data has been received, or after receiving complete information for a tax topic, or after a certain time period has elapsed during preparation of the tax return, or other suitable point in the process of preparing a tax return. Accordingly, in order to efficiently perform the tax calculations, which also efficiently utilizes the computing power of the system, the system is configured to perform only the calculations which are changed by the new taxpayer-specific tax data received since the previous tax calculation executed by the tax calculation engine. Thus, if a calculation in the tax calculation graph is not changed by the new taxpayer-specific tax data received since the previous tax calculation, that calculation is not performed again.
The system may also determine whether the new taxpayer-specific tax data does, or does not, change the calculated tax return, and the reason why. This information may be of interest to a user for various reasons, such as tax planning, error checking and/or other reasons.
Tax preparation is a time-consuming and laborious process. It is estimated that individuals and businesses spend around 6.1 billion hours per year complying with the filing requirements of the Internal Revenue Code. Tax return preparation software has been commercially available to assist taxpayers in preparing their tax returns. Tax return preparation software is typically run on a computing device such as a computer, laptop, tablet, or mobile computing device such as a Smartphone. Traditionally, a user has walked through a set of rigidly defined user interface interview screens that selectively ask questions that are relevant to a particular tax topic or data field needed to calculate a taxpayer's tax liability.
In contrast to the rigidly defined user interface screens used in prior iterations of tax preparation software, the current invention provides tax preparation software 100 that may run on computing devices 102 that operate on a new construct in which tax rules and the calculations based thereon are established in declarative data-structures, namely, completeness graph(s) and tax calculation graph(s). Use of these data-structures permits the user interface to be loosely connected or even divorced from the tax calculation engine and the data used in the tax calculations. Tax calculations are performed dynamically based on tax data derived from sourced data, estimates, or user input. A smart tax logic agent running on a set of rules can review current run time data and evaluate missing data fields and propose suggested questions to be asked to a user to fill in missing blanks. This process can be continued until all tax topics are complete. An electronic return can then be prepared and filed with respect to the relevant taxing jurisdictions.
Note that in
The completeness graph 12 and the tax calculation graph 14 represent data structures that can be constructed in the form of a tree.
As one can imagine given the complexities and nuances of the tax code, many tax topics may contain completeness graphs 12 that have many nodes with a large number of pathways to completion. However, many branches or lines within the completeness graph 12 can be ignored, for example, when certain questions internal to the completeness graph 12 are answered that eliminate other nodes 20 and arcs 22 within the completeness graph 12. The dependent logic expressed by the completeness graph 12 allows one to minimize subsequent questions based on answers given to prior questions. This allows a minimum question set to be generated that can be presented to a user as explained herein.
As explained herein, the directed graph or completion graph 12 that is illustrated in
Referring to
After an initial question has been presented and rows are eliminated as a result of the selection, a collection of candidate questions from the remaining available rows 32a and 32b is determined. From this universe of candidate questions from the remaining rows, a candidate question is selected. In this case, the candidate questions are questions QC and QG in columns 34c, 34g, respectively. One of these questions is selected and the process repeats until either the goal 34h is reached or there is an empty candidate list.
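The row-elimination and candidate-selection steps above can be sketched as follows; the table contents are invented, but they mirror the QA/QC/QG example in the text:

```python
# Hypothetical sketch: each decision-table row maps a question to its
# required answer. Rows inconsistent with the answers so far are dropped,
# and unanswered questions in the surviving rows become candidates.

def eliminate_rows(rows, answers):
    """Keep only rows consistent with every answer given so far."""
    return [r for r in rows
            if all(r.get(q, a) == a for q, a in answers.items())]

def candidate_questions(rows, answers):
    """Questions appearing in remaining rows that are not yet answered."""
    return sorted({q for r in rows for q in r if q not in answers})

rows = [
    {"QA": "Y", "QC": "Y"},
    {"QA": "Y", "QG": "N"},
    {"QA": "N", "QB": "Y"},
]
answers = {"QA": "Y"}
remaining = eliminate_rows(rows, answers)
print(candidate_questions(remaining, answers))
```

Answering QA as “Y” eliminates the third row, leaving QC and QG as the candidate questions, which parallels the example above.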
Still other internal nodes 26 semantically represent a tax concept and may be calculated using a function node 28. Some or all of these internal nodes 26 may be labeled as “tax concepts.” Interconnected nodes 26 containing tax concepts may be connected via “gist” functions that can be tagged and later be used or called upon to explain to the user the reasoning behind why a particular result was calculated or determined by the tax preparation software 100 program as explained in more detail below. Gists are well-defined functions to capture domain-specific patterns and semantic abstractions used in tax calculations. Gists can be de-coupled from a specific narrow definition and instead be associated with one or more explanations. Examples of common “gists” found in tax legislation/rules include the concepts of “caps” or “exceptions” that are found in various portions of the tax code. The function node 28 may include any number of mathematical or other operations. Examples of functions 28 include summation, subtraction, multiplication, division, and look-ups of tables or values from a database 30 or library as is illustrated in
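Two of the function-node operations named above, summation and a table look-up, might be sketched as follows. The bracket thresholds and rates are invented for illustration and are not actual tax figures:

```python
# Hypothetical function nodes: a summation node and a table look-up node.

def summation_node(*inputs):
    """Sum the values of the input nodes."""
    return sum(inputs)

def lookup_node(table, key):
    """Return the rate for the first bracket whose threshold covers key.
    Falls back to the last bracket's rate (invented behavior)."""
    for threshold, rate in table:
        if key <= threshold:
            return rate
    return table[-1][1]

brackets = [(9_875, 0.10), (40_125, 0.12), (85_525, 0.22)]  # invented values
income = summation_node(30_000, 5_000)
print(lookup_node(brackets, income))
```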
The calculation graph 14 also has a plurality of calculation paths connecting the nodes 24, 26 and 28, which define data dependencies between the nodes. A second node is considered to be dependent on a first node if a calculation (calculation includes any determination within the calculation graph, such as function, decisions, etc.) at the second node depends on a value of the first node. A second node has a direct dependency on the first node if it is directly dependent on the first node without any intervening nodes. A second node has an indirect dependency on the first node if it is dependent on a node which is directly dependent on the first node or an intervening node along a calculation path to the first node. Although there are many more calculation paths in the calculation graph 14 of
The schema 44 may be a modified version of the MeF schema used by the IRS. For example, the schema 44 may be an extended or expanded version (designated MeF++) of the MeF model established by government authorities. While the particular MeF schema 44 is discussed herein, the invention is not so limited. There may be many different schemas 44 depending on the tax jurisdiction. For example, Country A may have a tax schema 44 that varies from Country B. Different regions or states within a single country may even have different schemas 44. The systems and methods described herein are not limited to a particular schema 44 implementation. The schema 44 may contain all the data fields required to prepare and file a tax return with a government taxing authority. This may include, for example, all fields required for any tax forms, schedules, and the like. Data may include text, numbers, and a response to a Boolean expression (e.g., True/False or Yes/No). As explained in more detail, the shared data store 42 may, at any particular time, have a particular instance 46 of the MeF schema 44 (or MeF++ schema) stored therein. For example,
As seen in
For example, user input 48a is one type of data source 48. User input 48a may take a number of different forms. For example, user input 48a may be generated by a user using an input device such as a keyboard, mouse, touchscreen display, voice input (e.g., voice to text feature) or the like to enter information manually into the tax preparation software 100. For example, as illustrated in
User input 48a may also include some form of automatic data gathering. For example, a user may scan or take a photographic image of a tax document (e.g., W-2 or 1099) that is then processed by the tax preparation software 100 to extract relevant data fields that are then automatically transferred and stored within the data store 42. OCR techniques along with pre-stored templates of tax reporting forms may be called upon to extract relevant data from the scanned or photographic images whereupon the data is then transferred to the shared data store 42.
Another example of a data source 48 is a prior year tax return 48b. A prior year tax return 48b that is stored electronically can be searched, and data copied and transferred to the shared data store 42. The prior year tax return 48b may be in a proprietary format (e.g., .txf, .pdf) or an open source format. The prior year tax return 48b may also be in a paper or hardcopy format that can be scanned or imaged, whereby data is extracted and transferred to the shared data store 42. In another embodiment, a prior year tax return 48b may be obtained by accessing a government database (e.g., IRS records).
An additional example of a data source 48 is an online resource 48c. An online resource 48c may include, for example, websites for the taxpayer(s) that contain tax-related information. For example, financial service providers such as banks, credit unions, brokerages, and investment advisors typically provide online access for their customers to view holdings, balances, and transactions. Financial service providers also typically provide year-end tax documents to their customers such as, for instance, 1099-INT (interest income), 1099-DIV (dividend income), 1099-B (brokerage proceeds), and 1098 (mortgage interest) forms. The data contained on these tax forms may be captured and transferred electronically to the shared data store 42.
Of course, there are additional examples of online resources 48c beyond financial service providers. For example, many taxpayers may have social media or similar accounts. These include, by way of illustration and not limitation, Facebook, Linked-In, Twitter, and the like. Users may post or store personal information on these properties that may have tax implications. For example, a user's Linked-In account may indicate that a person changed jobs during a tax year. Likewise, a posting on Facebook about a new home may suggest that a person has purchased a home, moved to a new location, or changed jobs, all of which may have possible tax ramifications. This information is then acquired and transferred to the shared data store 42, which can be used to drive or shape the interview process described herein. For instance, using the example above, a person may be asked whether or not she changed jobs during the year (e.g., “It looks like you changed jobs during the past year, is this correct?”). Additional follow-up questions can then be presented to the user.
Still referring to
Still referring to
Referring now to
As an example using the sample calculation graph 14 of
At step 1216, the system 40 then receives, by any of the processes described herein, additional taxpayer-specific tax data (referred to as “new taxpayer-specific tax data”) which was not utilized in the first tax calculation, such as responses received in response to the questions presented to the user by the user interface manager, which questions are based on the suggestions output by the tax logic agent.
After receiving some amount of new taxpayer-specific tax data, at step 1218, the tax return preparation system executes the tax calculation engine to perform a second tax calculation only for those calculations in the tax calculation graph(s) 14 which are changed by the new taxpayer-specific tax data. For instance, the system can perform the second tax calculation after receiving each response or additional tax-payer specific data, or after receiving complete information for a tax topic, or after a certain elapsed time period during preparation of the tax return, or other suitable point in the process of preparing a tax return.
In the second tax calculation, the tax calculation engine only performs those calculations in the tax calculation graph(s) 14 which are changed by the new taxpayer-specific tax data from the calculation performed in the first tax calculation. Thus, this includes only those calculations at nodes along calculation paths 27 which have a direct or indirect data dependency on a node utilizing the new taxpayer-specific tax data, and for which the value of the node from which it depends has changed from its value obtained by the first tax calculation. Accordingly, if a calculation at a node in the tax calculation graph 14 is not changed by the new taxpayer-specific tax data, the calculation is not performed, because it does not need to be performed, since it will have no effect on the tax calculations. For instance, if the new taxpayer-specific tax data is related to a particular tax topic, then only the calculations in that tax topic part of the tax calculation graph may need to be calculated, as well as any other parts of the calculation graph changed by a change in that tax topic part of the tax calculation graph. Moreover, if the second calculation of the tax topic part of the tax calculation graph does not result in a change in any value(s) utilized by other parts of the calculation graph, then no other part of the tax calculation graph 14 needs to be calculated during the second tax calculation. The second calculation results in a second calculated tax data, which may include the same types of calculated data as described above for the first calculated tax data.
Describing the second calculation in terms of the calculation paths 27 and nodes 24, 26 and 28, the tax calculation engine 50 only performs those calculations at nodes in the tax calculation graph having a direct or indirect data dependency on a node utilizing the new taxpayer-specific tax data, and for which the value of the node from which it directly depends has changed from its value obtained by the first tax calculation. Continuing the example using
At step 1220 of method 1210, the tax return preparation system 40 determines whether there is a difference between the first tax calculation data and the second tax calculation data. This can also be considered as a determination whether the new taxpayer-specific tax data causes a change in a result of the tax return, such as the total tax liability, tax refund or tax owed. For instance, the first tax calculation data and second tax calculation data may include only the final tax result, such as an overall tax liability, and a tax refund or tax owed. It may be of interest to the user to know whether the new taxpayer-specific tax data changed the final tax result.
At step 1222, the tax return preparation system 40 identifies a reason why there is, or is not, a difference between the first and second tax calculation data, as determined at step 1220. This identification is equivalent to identifying a reason why the new taxpayer-specific tax data causes, or does not cause, a change in the tax return. The system 40 may make this determination by any suitable method, such as by identifying and analyzing the node(s) and/or gist(s) at which there is a change, or no change, in the value between the first tax calculation and the second tax calculation. Each of these node(s) or gist(s) is referred to as a “reason node” or “reason gist,” respectively. As explained above, a node may be interconnected to another node by a “gist” function. The tax return preparation system may use the reason node or reason gist to determine the reason why there is, or is not, a difference between the first and second tax calculation data. For instance, in the example started above, assume there is a gist function interconnecting the node labeled totalInterest to the node 28a in which the gist function requires the totalInterest to be more than a minimum value in order to be included in the taxpayer's AGI (adjusted gross income) (note that this example is hypothetical). Thus, if the totalInterest is less than the minimum value, then the new taxpayer-specific tax data comprising 1099 INT data will not be included in the AGI, and there will effectively be no change in the second calculated tax data comprising the total tax liability and tax refund or tax owed, as the case may be. In this case, the system 40 will identify that the reason there was no change in the second calculated tax data is that the 1099 INT is less than the minimum taxable 1099 INT amount.
Conversely, if the totalInterest from the new 1099 INT data exceeds the minimum value, then there will most likely be a change in the second calculated tax data for the total tax liability and the tax refund or tax owed, because the AGI will increase. In this case, the system 40 will identify that the reason for the change in the second calculated tax data is because the 1099 INT exceeds the minimum taxable 1099 INT amount.
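The behavior of steps 1220 and 1222 for the hypothetical totalInterest gist can be illustrated with a short sketch. The minimum-interest threshold, the 20% rate, and the node names are invented for this example and are not part of the actual system.

```python
# Illustrative sketch of steps 1220-1222: apply the hypothetical totalInterest
# gist, diff the first and second calculated tax data, and report "reason nodes"
# (nodes whose values changed between the two runs). All values are invented.

MIN_TAXABLE_INTEREST = 10.0   # assumed threshold for the hypothetical gist

def second_calculation(first, new_1099_int):
    """Interest below the minimum is excluded from AGI, so nothing changes."""
    second = dict(first)
    if new_1099_int >= MIN_TAXABLE_INTEREST:
        second["totalInterest"] = first["totalInterest"] + new_1099_int
        second["AGI"] = first["AGI"] + new_1099_int
        second["totalTax"] = round(second["AGI"] * 0.20, 2)  # assumed flat rate
    return second

def explain_difference(first, second):
    """Step 1222: reason nodes are the nodes whose values differ between runs."""
    reason_nodes = sorted(k for k in first if first[k] != second[k])
    if reason_nodes:
        return True, "changed nodes: " + ", ".join(reason_nodes)
    return False, "1099 INT is below the minimum taxable interest amount"
```

A below-threshold 1099 INT amount yields no difference and the threshold is reported as the reason; an above-threshold amount yields changed AGI and tax nodes as the reason.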
At step 1224, the tax return preparation system 40 presents to the user the reason why there is, or is not, a difference between the first and second tax calculation data. Referring again to the example started above, if the new 1099 INT data changes the overall tax liability, this information may be utilized for tax planning purposes, such as the taxpayer investing less in investments producing 1099 INT income and investing more in tax-efficient investments. Conversely, if the new 1099 INT data did not change the overall tax liability, the taxpayer may want to invest more in investments earning 1099 INT income, such that the 1099 INT income is expected to be at or near the minimum taxable amount.
Referring back to
As seen in
The following pseudo code generally expresses how a rule engine 64 functions utilizing a fact cache based on the runtime canonical data 62 or the instantiated representation of the canonical tax schema 46 at runtime, generating non-binding suggestions 66 provided as an input to a UI control 80. As described in U.S. application Ser. No. 14/097,057, previously incorporated herein by reference, data such as required inputs can be stored to a fact cache so that the needed inputs can be recalled at a later time, and to determine what is already known about variables, factors or requirements of various rules:
Rule engine (64)/Tax Logic Agent (TLA) (60)
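As a loose illustration of the behavior described above — not the actual rule set or engine — a rule loop over a fact cache that emits non-binding suggestions for missing inputs might look like:

```python
# Illustrative sketch of rule engine 64 / TLA 60: each rule declares the fields
# it needs; fields absent from the fact cache (runtime data 62) produce
# non-binding suggestions 66. Rule names and fields are hypothetical.

fact_cache = {"filing_status": "single", "w2_income": 55000}   # known runtime data

rules = [
    # (rule name, fields the rule needs, suggestion when a field is missing)
    ("interest_income", ["1099_int_amount"], "Ask user for 1099-INT interest income"),
    ("dependents",      ["num_dependents"],  "Ask user for number of dependents"),
    ("wages",           ["w2_income"],       "Ask user for W-2 wages"),
]

def run_rules(cache):
    """Evaluate each rule against the fact cache; collect suggestions for unknowns."""
    suggestions = []
    for name, required, suggestion in rules:
        if any(field not in cache for field in required):
            suggestions.append(suggestion)   # non-binding: UI control 80 may ignore it
    return suggestions
```

Because W-2 wages are already in the fact cache, only the interest-income and dependents questions are suggested; the UI control remains free to reorder or ignore them.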
The TLA 60 may also receive or otherwise incorporate information from a statistical/life knowledge module 70. The statistical/life knowledge module 70 contains statistical or probabilistic data related to the taxpayer. For example, statistical/life knowledge module 70 may indicate that taxpayers residing within a particular zip code are more likely to be homeowners than renters. The TLA 60 may use this knowledge to weight particular topics or questions related to these topics. For example, in the example given above, questions about home mortgage interest may be promoted or otherwise given a higher weight. The statistical knowledge may apply in other ways as well. For example, tax forms often require a taxpayer to list his or her profession. These professions may be associated with transactions that may affect tax liability. For instance, a taxpayer may list his or her occupation as “teacher.” The statistic/life knowledge module 70 may contain data that shows that a large percentage of teachers have retirement accounts and in particular 403(b) retirement accounts. This information may then be used by the TLA 60 when generating its suggestions 66. For example, rather than asking generically about retirement accounts, the suggestion 66 can be tailored directly to a question about 403(b) retirement accounts.
The data that is contained within the statistic/life knowledge module 70 may be obtained by analyzing aggregate tax data of a large body of taxpayers. For example, entities having access to tax filings may be able to mine their own proprietary data to establish connections and links between various taxpayer characteristics and tax topics. This information may be contained in a database or other repository that is accessed by the statistic/life knowledge module 70. This information may be periodically refreshed or updated to reflect the most up-to-date relationships. Generally, the data contained in the statistic/life knowledge module 70 is not specific to a particular tax payer but is rather generalized to characteristics shared across a number of tax payers although in other embodiments, the data may be more specific to an individual taxpayer.
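The weighting behavior of the statistical/life knowledge module 70 can be sketched as follows. The homeownership rates, occupation statistics, and multipliers below are invented for illustration and do not reflect any actual aggregate data.

```python
# Hypothetical illustration of statistical/life knowledge module 70: aggregate
# statistics (homeownership rate by zip code, 403(b) prevalence by occupation)
# raise the weight of related tax topics. All numbers are invented.

ZIP_HOMEOWNER_RATE = {"94040": 0.72, "10001": 0.18}          # assumed statistics
OCCUPATION_TOPIC_BOOST = {"teacher": {"403b_retirement": 2.0}}

def weight_topics(base_weights, zip_code, occupation):
    weights = dict(base_weights)
    # Taxpayers in high-homeownership zips are likelier to have mortgage interest.
    if ZIP_HOMEOWNER_RATE.get(zip_code, 0.0) > 0.5:
        weights["home_mortgage_interest"] = weights.get("home_mortgage_interest", 1.0) * 1.5
    # Occupation-specific boosts, e.g. teachers and 403(b) retirement accounts.
    for topic, boost in OCCUPATION_TOPIC_BOOST.get(occupation, {}).items():
        weights[topic] = weights.get(topic, 1.0) * boost
    return weights
```

The TLA could then promote the highest-weighted topics when generating its suggestions 66, e.g. asking a teacher specifically about 403(b) accounts rather than retirement accounts generically.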
Still referring to
The user interface manager 82, as explained previously, receives non-binding suggestions from the TLA 60. The non-binding suggestions may include a single question or multiple questions that are suggested to be displayed to the taxpayer via the user interface presentation 84. The user interface manager 82, in one aspect of the invention, contains a suggestion resolution element 88, which is responsible for resolving how to respond to the incoming non-binding suggestions 66. For this purpose, the suggestion resolution element 88 may be programmed or configured internally. Alternatively, the suggestion resolution element 88 may access external interaction configuration files. Additional details regarding configuration files and their use may be found in U.S. patent application Ser. No. 14/206,834, which is incorporated by reference herein.
Configuration files specify whether, when and/or how non-binding suggestions are processed. For example, a configuration file may specify a particular priority or sequence for processing non-binding suggestions 66, such as now or immediately, in the current user interface presentation 84 (e.g., interview screen), in the next user interface presentation 84, in a subsequent user interface presentation 84, or in a random sequence (e.g., as determined by a random number or sequence generator). As another example, a configuration file may classify certain non-binding suggestions as ones to be ignored. A configuration file may also specify content (e.g., text) of the user interface presentation 84 that is to be generated based at least in part upon a non-binding suggestion 66.
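The dispositions above can be sketched as a small resolution routine. The configuration keys, suggestion classes, and dispositions are assumptions for illustration, not an actual interaction configuration file format.

```python
# Sketch of suggestion resolution element 88: a configuration maps each class of
# non-binding suggestion to a disposition (process now, defer, or ignore).
# The classes and dispositions shown are hypothetical.

CONFIG = {
    "missing_income_doc": "now",          # ask in the current interview screen
    "optional_deduction": "next_screen",  # defer to the next presentation
    "low_priority_tip":   "ignore",       # drop entirely
}

def resolve(suggestions):
    """Partition incoming non-binding suggestions by configured disposition."""
    now, deferred = [], []
    for topic, question in suggestions:
        disposition = CONFIG.get(topic, "next_screen")  # assumed default
        if disposition == "now":
            now.append(question)
        elif disposition != "ignore":
            deferred.append(question)
    return now, deferred
```

This mirrors the non-binding nature of the suggestions: the same incoming list can be asked immediately, postponed, or discarded purely by changing the configuration.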
A user interface presentation 84 may be a pre-programmed interview screen that can be selected and provided to the generator element 85 for providing the resulting user interface presentation 84 or content or sequence of user interface presentations 84 to the user. User interface presentations 84 may also include interview screen templates, which are blank or partially completed interview screens that can be utilized by the generator element 85 to construct a final user interface presentation 84 on the fly during runtime.
As seen in
Still referring to
Online resources 118 may also be used by the estimation module 110 to provide estimated values. Online resources 118 include, for example, financial services accounts for a taxpayer that can be accessed to estimate certain values. For example, a taxpayer may have one or more accounts at a bank, credit union, or stock brokerage. These online resources 118 can be accessed by the tax preparation software 100 to scrape, copy, or otherwise obtain tax relevant data. For example, online resources 118 may be accessed to estimate the value of interest income earned. A user's linked accounts may be accessed to find all of the interest income transactions that have occurred in the past year. This information may be used as the basis to estimate total interest income for the taxpayer. In another example, online resources 118 may be accessed to estimate the amount of mortgage interest that has been paid by a taxpayer, instead of waiting for a Form 1098 from the mortgage service provider.
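The interest-income estimate described above amounts to filtering and summing scraped transactions. The transaction record shape below is an assumption for illustration; an actual integration would depend on the account data source.

```python
# Illustrative estimate of total interest income from linked-account
# transactions, per the description above. The transaction dictionary shape
# ("type", "amount", "date") is an assumption, not an actual data format.

from datetime import date

def estimate_interest_income(transactions, tax_year):
    """Sum interest-type transactions posted during the given tax year."""
    return round(sum(
        t["amount"] for t in transactions
        if t["type"] == "interest" and t["date"].year == tax_year
    ), 2)
```

Transactions from other years or of other types (deposits, transfers) are excluded, so the result approximates the total that a Form 1099-INT would later report.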
Still referring to
It should also be understood that the estimation module 110 may rely on one or more inputs to arrive at an estimated value. For example, the estimation module 110 may rely on a combination of prior tax return data 116 in addition to online resources 118 to estimate a value. This may result in more accurate estimations by relying on multiple, independent sources of information. The UI control 80 may be used in conjunction with the estimation module 110 to select those sources of data to be used by the estimation module 110. For example, user input 114 requires the user to enter data using a user interface presentation 84. The UI control 80 may also be used to identify and select prior tax returns 116. Likewise, user names and passwords may be needed for online resources 118 and third party information 120, in which case the UI control 80 will be needed to obtain this information from the user.
In one embodiment of the invention, the estimated values or other estimated data provided by the estimation module 110 may be associated with one or more attributes 122 as illustrated in
The attributes 122 may also include a confidence level 126 associated with each estimated field. The confidence level 126 is indicative of the level of trustworthiness of the estimated user-specific tax data and may be expressed in a number of different ways. For example, the confidence level 126 may be broken down into intervals (e.g., low, medium, high), with each estimated value given an associated label (e.g., L—low, M—medium, H—high). Alternatively, confidence levels 126 may be described along a continuum without specific ranges (e.g., a range from 0.0 to 1.0, with 0.0 being no confidence and 1.0 being 100% confidence). The confidence level 126 may be assigned based on the source of the estimated user-specific tax data (e.g., source #1 is nearly always correct, so estimated data obtained from this source will be automatically assigned a high confidence level).
In some embodiments, the estimation module 110 may acquire a plurality of estimates from different sources (e.g., user input 114, prior year tax returns 116, online resources 118, third party information 120) and only write the “best” estimate to the shared data store 42 (e.g., the estimate from the source with the highest confidence level 126). Alternatively, the estimation module 110 may be configured to ignore data (e.g., sources) having confidence levels 126 below a pre-determined threshold. For example, all “low” level data from a source may be ignored. Alternatively, all the data may be stored in the shared data store 42 including, for example, the attribute 122 of the confidence level 126 with each entry. The tax calculation engine 50 may ignore data entries having a confidence level below a pre-determined threshold. The estimation module 110 may also generate a number of different estimates from a variety of different sources and then write a composite estimate based on all the information from all the different sources. For example, sources having higher confidence levels 126 may be weighted more heavily than other sources having lower confidence levels 126.
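The selection policies described above — keep only the highest-confidence estimate, or combine sources into a confidence-weighted composite above a threshold — can be sketched briefly. Source names, values, and the 0.0–1.0 confidence scale follow the continuum example; the specific numbers are invented.

```python
# Sketch of the estimate-selection policies described above. Estimates are
# dictionaries with "value" and "confidence" (0.0-1.0); all figures are
# illustrative, not actual source reliabilities.

def best_estimate(estimates):
    """Write only the estimate from the most trusted source."""
    return max(estimates, key=lambda e: e["confidence"])["value"]

def composite_estimate(estimates, threshold=0.3):
    """Confidence-weighted average over sources at or above the threshold."""
    usable = [e for e in estimates if e["confidence"] >= threshold]
    total_weight = sum(e["confidence"] for e in usable)
    return round(sum(e["value"] * e["confidence"] for e in usable) / total_weight, 2)
```

With three sources at confidences 0.9, 0.6, and 0.1, the low-confidence outlier is dropped by the threshold and the composite is pulled toward the most trusted value.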
Still referring to
Referring back to
In some embodiments, each estimated value produced by the estimation module 110 will need to be confirmed by the user using the UI control 80. For example, the user interface manager 82 may present estimated data fields to the user for confirmation or verification using a user interface presentation 84. In other embodiments, however, the user may override data using the user interface presentation 84. Some estimated data, for example, data having a high confidence level 126 may not need to be confirmed but can be assumed as accurate.
The confidence level indicator 132 may take a number of different forms, however. For example, the confidence level indicator 132 may be in the form of a gauge or the like, such as that illustrated in
Referring back to
In one embodiment, the gathering or importation of data sources such as prior tax returns 48b, online resources 48c, and third party information 48d is optional. For example, a taxpayer may want to start the process from scratch without pulling information from other sources. However, in order to streamline and more efficiently complete a tax return other users may desire to obtain tax related information automatically. This would reduce the number of interview or prompt screens that are presented to the user if such information were obtained automatically by the tax preparation software 100. A user may be given the opportunity to select which data sources 48 they want accessed and searched for relevant tax related data that will be imported into the shared data store 42. A user may be asked to submit his or her account and password information for some data sources 48 using the UI control 80. Other data sources 48 such as some third party data sources 48d may be accessed without such information.
Next, as seen in operation 1200, after the schema 44 is populated with the various imported or entered data fields from the data sources 48, the tax calculation engine 50, using the calculation graphs 14, reads data from the shared data store 42, performs tax calculations, and writes back data to the shared data store 42. The schema 44 may also be populated with estimates or educated guesses as explained herein using the estimation module 110 as described in the context of the embodiment of
In operation 1300, the tax logic agent 60 reads the run time data 62 which represents the instantiated representation of the canonical tax schema 44 at runtime. The tax logic agent 60 then utilizes the decision tables 30 to generate and send non-binding suggestions 66 to the UI control 80 as seen in operation 1400. Alternatively, the tax logic agent 60 may determine that completeness has been achieved across the tax topics, in which case a done instruction may be delivered to the UI control as seen in operation 1500. If not done, the process continues whereby the user interface manager 82 will then process the suggestion(s) 66 using the suggestion resolution element 88 for resolving how to respond to the incoming non-binding suggestions 66 as seen in operation 1600. The user interface manager 82 then generates a user interface presentation 84 to the user as seen in operation 1700 whereby the user is presented with one or more prompts. The prompts may include questions, affirmations, confirmations, declaratory statements, and the like. The prompts are displayed on a screen 104 of the computing device 102 whereby the user can then respond to the same by using one or more input devices associated with the computing device 102 (e.g., keyboard, mouse, finger, stylus, voice recognition, etc.).
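The loop across operations 1200–1700 can be sketched at a high level as follows; the shared data store, the calculation engine, and the tax logic agent are reduced to trivial stand-ins, and field names are hypothetical.

```python
# High-level sketch of operations 1200-1700: calculate, ask the tax logic agent
# for suggestions about missing data, prompt the user, write answers back to the
# shared data store, and repeat until completeness ("done"). All logic is a
# simplified stand-in for the actual components.

def prepare_return(shared_data, required_fields, answer_source):
    iterations = 0
    while True:
        iterations += 1
        # Operation 1200: calculation engine reads/writes the shared data store.
        shared_data["total_income"] = sum(
            v for k, v in shared_data.items()
            if k.endswith("_income") and k != "total_income")
        # Operations 1300-1400: tax logic agent emits non-binding suggestions.
        suggestions = [f for f in required_fields if f not in shared_data]
        if not suggestions:
            return shared_data, iterations     # operation 1500: "done"
        # Operations 1600-1700: UI manager prompts; the response goes to the store.
        field = suggestions[0]
        shared_data[field] = answer_source[field]
```

Each pass recalculates from whatever data is present, so the running totals stay current as the user fills in the blanks, matching the dynamic-calculation behavior described above.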
Still referring to
Method embodiments may also be embodied in, or readable from, a computer-readable medium or carrier, e.g., one or more of the fixed and/or removable data storage devices and/or data communications devices connected to a computer. Carriers may be, for example, magnetic storage medium, optical storage medium and magneto-optical storage medium. Examples of carriers include, but are not limited to, a floppy diskette, a memory stick or a flash drive, CD-R, CD-RW, CD-ROM, DVD-R, DVD-RW, or other carrier now known or later developed capable of storing data. The processor 304 performs steps or executes program instructions 302 within memory 300 and/or embodied on the carrier to implement method embodiments.
Embodiments, however, are not so limited and implementation of embodiments may vary depending on the platform utilized. Accordingly, embodiments are intended to exemplify alternatives, modifications, and equivalents that may fall within the scope of the claims.
This application is a Continuation Application of U.S. application Ser. No. 14/462,345 filed Aug. 18, 2014. The entirety of the above-listed application is incorporated herein by reference.
Number | Name | Date | Kind |
---|---|---|---|
4213251 | Foundos | Jul 1980 | A |
4809219 | Ashford et al. | Feb 1989 | A |
5001006 | Saito et al. | Mar 1991 | A |
5006998 | Yasunobu | Apr 1991 | A |
5019664 | Del Rossi et al. | May 1991 | A |
5082144 | Sundstrom | Jan 1992 | A |
5417596 | Kusakabe | May 1995 | A |
5495607 | Pisello et al. | Feb 1996 | A |
5557761 | Chan et al. | Sep 1996 | A |
5607353 | Hutchings et al. | Mar 1997 | A |
5673369 | Kim | Sep 1997 | A |
5690854 | Brueckner et al. | Nov 1997 | A |
5742836 | Turpin et al. | Apr 1998 | A |
5788412 | Jatkar | Aug 1998 | A |
5813178 | Edwards | Sep 1998 | A |
5819231 | Tremaine et al. | Oct 1998 | A |
5819249 | Dohanich | Oct 1998 | A |
6078898 | Davis | Jun 2000 | A |
6269355 | Grimse et al. | Jul 2001 | B1 |
6334110 | Walter et al. | Dec 2001 | B1 |
6473741 | Baker | Oct 2002 | B1 |
6535883 | Lee et al. | Mar 2003 | B1 |
6601055 | Roberts | Jul 2003 | B1 |
6631361 | O'Flaherty et al. | Oct 2003 | B1 |
6651217 | Kennedy | Nov 2003 | B1 |
6670969 | Halstead et al. | Dec 2003 | B1 |
6690854 | Helbing | Feb 2004 | B2 |
6697787 | Miller | Feb 2004 | B1 |
6850924 | Grimse et al. | Feb 2005 | B2 |
6898573 | Piehl | May 2005 | B1 |
6912508 | McCalden | Jun 2005 | B1 |
6925441 | Jones et al. | Aug 2005 | B1 |
7234103 | Regan | Jun 2007 | B1 |
7295998 | Kulkarni | Nov 2007 | B2 |
7331045 | Martin et al. | Feb 2008 | B2 |
7340679 | Botscheck et al. | Mar 2008 | B2 |
7448022 | Ram et al. | Nov 2008 | B1 |
7539635 | Peak et al. | May 2009 | B1 |
7565312 | Shaw | Jul 2009 | B1 |
7603301 | Regan | Oct 2009 | B1 |
7668763 | Albrecht | Feb 2010 | B1 |
7680756 | Quinn | Mar 2010 | B2 |
7685082 | Coletta | Mar 2010 | B1 |
7693760 | Fiteni | Apr 2010 | B1 |
7693769 | Burlison et al. | Apr 2010 | B1 |
7716094 | Sutter et al. | May 2010 | B1 |
7742958 | Leek | Jun 2010 | B1 |
7747484 | Stanley | Jun 2010 | B2 |
7761333 | Kapp | Jul 2010 | B2 |
7778895 | Baxter | Aug 2010 | B1 |
7805349 | Yu | Sep 2010 | B2 |
7818222 | Allanson et al. | Oct 2010 | B2 |
7849405 | Coletta | Dec 2010 | B1 |
7860763 | Quinn et al. | Dec 2010 | B1 |
7865829 | Goldfield | Jan 2011 | B1 |
7895102 | Wilks et al. | Feb 2011 | B1 |
7899757 | Talan | Mar 2011 | B1 |
7900298 | Char et al. | Mar 2011 | B1 |
7908190 | Enenkiel | Mar 2011 | B2 |
7912767 | Cheatham et al. | Mar 2011 | B1 |
7912768 | Abeles | Mar 2011 | B2 |
7925553 | Banks | Apr 2011 | B2 |
8001006 | Yu et al. | Aug 2011 | B1 |
8019664 | Tifford | Sep 2011 | B1 |
8082144 | Brown et al. | Dec 2011 | B1 |
8086970 | Achtermann et al. | Dec 2011 | B2 |
8108258 | Slattery | Jan 2012 | B1 |
8109499 | Griese et al. | Feb 2012 | B2 |
8126820 | Talan | Feb 2012 | B1 |
8190499 | McVickar | May 2012 | B1 |
8204805 | Eftekhari et al. | Jun 2012 | B2 |
8224726 | Murray | Jul 2012 | B2 |
8234562 | Evans | Jul 2012 | B1 |
8244607 | Quinn | Aug 2012 | B1 |
8306885 | Brose | Nov 2012 | B2 |
8346635 | Olim | Jan 2013 | B1 |
8346680 | Castleman | Jan 2013 | B2 |
8370795 | Sage | Feb 2013 | B1 |
8386344 | Christina | Feb 2013 | B2 |
8407113 | Eftekhari et al. | Mar 2013 | B1 |
8417596 | Dunbar et al. | Apr 2013 | B1 |
8417597 | McVickar | Apr 2013 | B1 |
8447667 | Dinamani et al. | May 2013 | B1 |
8452676 | Talan | May 2013 | B1 |
8473880 | Bennett et al. | Jun 2013 | B1 |
8478671 | Tifford | Jul 2013 | B1 |
8510187 | Dinamani | Aug 2013 | B1 |
8527375 | Olim | Sep 2013 | B1 |
8560409 | Abeles | Oct 2013 | B2 |
8583516 | Pitt et al. | Nov 2013 | B1 |
8589262 | Wang | Nov 2013 | B1 |
8607353 | Rippert | Dec 2013 | B2 |
8635127 | Shaw | Jan 2014 | B1 |
8639616 | Rolenaitis | Jan 2014 | B1 |
8682756 | Tifford et al. | Mar 2014 | B1 |
8682829 | Barthel | Mar 2014 | B2 |
8694395 | Houseworth | Apr 2014 | B2 |
8706580 | Houseworth | Apr 2014 | B2 |
8732057 | Becker | May 2014 | B1 |
8788412 | Hamm | Jul 2014 | B1 |
8812380 | Murray | Aug 2014 | B2 |
8813178 | Khanna | Aug 2014 | B1 |
8838492 | Baker | Sep 2014 | B1 |
8892467 | Ball | Nov 2014 | B1 |
8930253 | Ball | Jan 2015 | B1 |
8949270 | Newton et al. | Feb 2015 | B2 |
9372687 | Pai | Jun 2016 | B1 |
9406089 | Mori | Aug 2016 | B2 |
9690854 | Stent et al. | Jun 2017 | B2 |
9760953 | Wang et al. | Sep 2017 | B1 |
9916628 | Wang et al. | Mar 2018 | B1 |
9922376 | Wang et al. | Mar 2018 | B1 |
9990678 | Cabrera et al. | Jun 2018 | B1 |
10580089 | Mori | Mar 2020 | B2 |
10614526 | Mori | Apr 2020 | B2 |
20020023064 | Grimse et al. | Feb 2002 | A1 |
20020065831 | DePaolo | May 2002 | A1 |
20020107698 | Brown et al. | Aug 2002 | A1 |
20020107824 | Ahmed | Aug 2002 | A1 |
20020111888 | Stanley et al. | Aug 2002 | A1 |
20020174017 | Singh | Nov 2002 | A1 |
20020198832 | Agee | Dec 2002 | A1 |
20030101070 | Mahosky et al. | May 2003 | A1 |
20030126054 | Purcell | Jul 2003 | A1 |
20030139827 | Phelps | Jul 2003 | A1 |
20030174157 | Hellman | Sep 2003 | A1 |
20030182102 | Corston-Oliver | Sep 2003 | A1 |
20040002906 | Von Drehnen et al. | Jan 2004 | A1 |
20040019540 | William | Jan 2004 | A1 |
20040019541 | William | Jan 2004 | A1 |
20040021678 | Ullah et al. | Feb 2004 | A1 |
20040078271 | Morano | Apr 2004 | A1 |
20040083164 | Schwartz | Apr 2004 | A1 |
20040088233 | Brady | May 2004 | A1 |
20040117395 | Gong | Jun 2004 | A1 |
20040172347 | Barthel | Sep 2004 | A1 |
20040181543 | Wu et al. | Sep 2004 | A1 |
20040205008 | Haynie et al. | Oct 2004 | A1 |
20050171822 | Cagan | Aug 2005 | A1 |
20050192823 | Kuhn et al. | Sep 2005 | A1 |
20050216379 | Ozaki | Sep 2005 | A1 |
20050262191 | Mamou et al. | Nov 2005 | A1 |
20060112114 | Yu | May 2006 | A1 |
20060155618 | Wyle | Jul 2006 | A1 |
20060155632 | Cherkas et al. | Jul 2006 | A1 |
20060178961 | Stanley et al. | Aug 2006 | A1 |
20060245369 | Schimmelpfeng | Nov 2006 | A1 |
20060282354 | Varghese | Dec 2006 | A1 |
20060293990 | Schaub | Dec 2006 | A1 |
20070011036 | Lo | Jan 2007 | A1 |
20070033116 | Murray | Feb 2007 | A1 |
20070033117 | Murray | Feb 2007 | A1 |
20070033130 | Murray | Feb 2007 | A1 |
20070055571 | Fox et al. | Mar 2007 | A1 |
20070094207 | Yu et al. | Apr 2007 | A1 |
20070136157 | Neher et al. | Jun 2007 | A1 |
20070150387 | Seubert et al. | Jun 2007 | A1 |
20070156564 | Humphrey et al. | Jul 2007 | A1 |
20070179841 | Agassi | Aug 2007 | A1 |
20070192166 | Van Luchene | Aug 2007 | A1 |
20070250418 | Banks et al. | Oct 2007 | A1 |
20080059900 | Murray | Mar 2008 | A1 |
20080071673 | Howard | Mar 2008 | A1 |
20080097878 | Abeles | Apr 2008 | A1 |
20080126170 | Leck et al. | May 2008 | A1 |
20080141247 | Saravanan | Jun 2008 | A1 |
20080147494 | Larson | Jun 2008 | A1 |
20080162310 | Quinn | Jul 2008 | A1 |
20080177631 | William | Jul 2008 | A1 |
20080215392 | Rajan | Sep 2008 | A1 |
20080243531 | Hyder et al. | Oct 2008 | A1 |
20090024694 | Fong | Jan 2009 | A1 |
20090037305 | Sander | Feb 2009 | A1 |
20090037847 | Achtermann et al. | Feb 2009 | A1 |
20090048957 | Celano | Feb 2009 | A1 |
20090064851 | Morris et al. | Mar 2009 | A1 |
20090112807 | Bahn | Apr 2009 | A1 |
20090117529 | Goldstein | May 2009 | A1 |
20090125618 | Huff | May 2009 | A1 |
20090138389 | Barthel | May 2009 | A1 |
20090150169 | Kirkwood | Jun 2009 | A1 |
20090157572 | Chidlovskii | Jun 2009 | A1 |
20090193389 | Miller | Jul 2009 | A1 |
20090204881 | Murthy | Aug 2009 | A1 |
20090239650 | Alderucci et al. | Sep 2009 | A1 |
20090248594 | Castleman | Oct 2009 | A1 |
20090248603 | Kiersky | Oct 2009 | A1 |
20100036760 | Abeles | Feb 2010 | A1 |
20100057659 | Phelon et al. | Mar 2010 | A1 |
20100063981 | Thomsen | Mar 2010 | A1 |
20100088124 | Diefendori et al. | Apr 2010 | A1 |
20100131394 | Rutsch et al. | May 2010 | A1 |
20100153138 | Evans | Jun 2010 | A1 |
20100161379 | Bene et al. | Jun 2010 | A1 |
20100179916 | Johns et al. | Jul 2010 | A1 |
20110004537 | Allanson et al. | Jan 2011 | A1 |
20110078062 | Kleyman | Mar 2011 | A1 |
20110099063 | Clemmons | Apr 2011 | A1 |
20110145112 | Abeles | Jun 2011 | A1 |
20110173222 | Sayal et al. | Jul 2011 | A1 |
20110225220 | Huang et al. | Sep 2011 | A1 |
20110258195 | Welling | Oct 2011 | A1 |
20110258610 | Aaraj et al. | Oct 2011 | A1 |
20110264569 | Houseworth et al. | Oct 2011 | A1 |
20120005209 | Rinearson et al. | Jan 2012 | A1 |
20120016817 | Smith et al. | Jan 2012 | A1 |
20120027246 | Tifford et al. | Feb 2012 | A1 |
20120030076 | Checco et al. | Feb 2012 | A1 |
20120030577 | Akolkar et al. | Feb 2012 | A1 |
20120072321 | Christian et al. | Mar 2012 | A1 |
20120109792 | Eftekhari et al. | May 2012 | A1 |
20120109793 | Abeles | May 2012 | A1 |
20120136764 | Miller et al. | May 2012 | A1 |
20120173295 | Phelon et al. | Jul 2012 | A1 |
20120215669 | Lieberman et al. | Aug 2012 | A1 |
20120278365 | Labat et al. | Nov 2012 | A1 |
20120296768 | Fremont-Smith et al. | Nov 2012 | A1 |
20120323749 | Lapidus | Dec 2012 | A1 |
20130036347 | Eftekhari et al. | Feb 2013 | A1 |
20130046603 | Grigg et al. | Feb 2013 | A1 |
20130080302 | Allanson et al. | Mar 2013 | A1 |
20130097262 | Dandison | Apr 2013 | A1 |
20130111032 | Alapati et al. | May 2013 | A1 |
20130138586 | Jung et al. | May 2013 | A1 |
20130185347 | Romano | Jul 2013 | A1 |
20130187926 | Silverstein et al. | Jul 2013 | A1 |
20130198047 | Houseworth | Aug 2013 | A1 |
20130218735 | Murray | Aug 2013 | A1 |
20130262279 | Finley et al. | Oct 2013 | A1 |
20130282539 | Murray | Oct 2013 | A1 |
20130290169 | Bathula et al. | Oct 2013 | A1 |
20140108213 | Houseworth | Apr 2014 | A1 |
20140149303 | Band | May 2014 | A1 |
20140172656 | Shaw | Jun 2014 | A1 |
20140201045 | Pai et al. | Jul 2014 | A1 |
20140207633 | Aldrich et al. | Jul 2014 | A1 |
20140241631 | Huang | Aug 2014 | A1 |
20140244455 | Huang | Aug 2014 | A1 |
20140244457 | Howell et al. | Aug 2014 | A1 |
20140337189 | Barsade et al. | Nov 2014 | A1 |
20150142703 | Rajesh | May 2015 | A1 |
20150237205 | Waller et al. | Aug 2015 | A1 |
20150254623 | Velez et al. | Sep 2015 | A1 |
20150269491 | Tripathi et al. | Sep 2015 | A1 |
20160027127 | Chavarria et al. | Jan 2016 | A1 |
20160063645 | Houseworth et al. | Mar 2016 | A1 |
20160071112 | Unser | Mar 2016 | A1 |
20160078567 | Goldman et al. | Mar 2016 | A1 |
20160092993 | Ciaramitaro | Mar 2016 | A1 |
20160092994 | Roebuck et al. | Mar 2016 | A1 |
20160098804 | Mascaro et al. | Apr 2016 | A1 |
20160148321 | Ciaramitaro et al. | May 2016 | A1 |
20160275627 | Wang | Sep 2016 | A1 |
20170004583 | Wang | Jan 2017 | A1 |
20170004584 | Wang | Jan 2017 | A1 |
20170032468 | Wang et al. | Feb 2017 | A1 |
20180032855 | Wang et al. | Feb 2018 | A1 |
Number | Date | Country |
---|---|---|
2002117121 | Apr 2002 | JP |
2005190425 | Jul 2005 | JP |
2014206960 | Oct 2014 | JP |
10-2012-0011987 | Feb 2012 | KR |
WO 2017004094 | Jan 2017 | WO |
WO 2017004095 | Jan 2017 | WO |
WO 2017019233 | Feb 2017 | WO |
WO 2017116496 | Jul 2017 | WO |
WO 2017116497 | Jul 2017 | WO |
WO 2018022023 | Feb 2018 | WO |
WO 2018022128 | Feb 2018 | WO |
WO 2018080562 | May 2018 | WO |
WO 2018080563 | May 2018 | WO |
Entry |
---|
Solomon L. Pollack; Analysis of the Decision Rules in Decision Tables, May 1963; The Rand Corporation; pp. iii, iv, 1, 20, & 24 (Year: 1963). |
Zhang, “Enabling Personalization Recommendation with Weighted FP for Text Information Retrieval Based on User-Focus”, Proceedings of the International Conference on Information Technology: Coding and Computing (ITCC'04), 2004, 5 pages. |
U.S. Appl. No. 16/502,863, filed Jul. 3, 2019. |
OECD (Using Third Party Information Reports to Assist Taxpayers Meet Their Return Filing Obligations—Country Experience with the Use of Pre-Populated Personal Tax Returns) Centre for Tax Policy and Administration, https://www.oecd.org/tax/administration/36280368.pdf), Mar. 2006. |
U.S. Appl. No. 13/923,266, filed Jun. 20, 2013, Pending. |
U.S. Appl. No. 14/462,345, filed Jul. 31, 2014, Abandoned. |
U.S. Appl. No. 14/448,962, filed Jul. 31, 2014, Pending. |
U.S. Appl. No. 14/448,986, filed Jul. 31, 2014, Pending. |
U.S. Appl. No. 14/448,922, filed Jul. 31, 2014, Pending. |
U.S. Appl. No. 14/555,543, filed Nov. 26, 2014, Issued. |
U.S. Appl. No. 14/555,334, filed Nov. 26, 2014, Pending. |
U.S. Appl. No. 14/555,296, filed Nov. 26, 2014, Issued. |
U.S. Appl. No. 14/673,646, filed Mar. 30, 2015, Issued. |
U.S. Appl. No. 14/555,222, filed Nov. 26, 2014, Pending. |
U.S. Appl. No. 14/701,087, filed Apr. 30, 2015, Pending. |
U.S. Appl. No. 16/148,506, filed Oct. 1, 2018, Pending. |
U.S. Appl. No. 16/188,442, filed Nov. 13, 2018, Pending. |
U.S. Appl. No. 16/262,698, filed Jan. 30, 2019, Pending. |
U.S. Appl. No. 16/266,754, filed Feb. 4, 2019, Pending. |
http://en.wikipedia.org/wiki/Loose_coupling, printed Mar. 11, 2014, 2 pages. |
http://www.webopedia.com/TERM/L/loose_coupling.html, printed Mar. 11, 2014, 4 pages. |
http://docs.jboss.org/drools/release/5.3.0.Final/drools-expert-docs/html/ch01.html, printed Mar. 11, 2014, 10 pages. |
http://en.wikipedia.org/wiki/Declarative_programming, printed Mar. 11, 2014, 4 pages. |
http://en.wikipedia.org/wiki/Drools, printed Mar. 11, 2014, 4 pages. |
http://quicken.intuit.com/support/help/income-and-expenses/how-to-assign-tax-form-line-items-to-a-category/GEN82142.html, updated Aug. 11, 2011, printed Jun. 24, 2014, 2 pages. |
http://quicken.intuit.com/support/help/reports--graphs-and-snapshots/track-the-earnings-taxes--deductions--or-deposits-from-paychecks/GEN82101.html, updated May 14, 2012, printed Jun. 24, 2014, 2 pages. |
http://quicken.intuit.com/support/help/tax-savings/simplify-tax-time/INF24047.html, updated Jul. 25, 2013, printed Jun. 24, 2014, 11 pages. |
http://www.jboss.org/drools/drools-expert.html, printed Mar. 11, 2014, 5 pages. |
https://turbotax.intuit.com/snaptax/mobile/, printed Mar. 11, 2014, 2 pages. |
NY State Dep of Taxation, NY State Personal Income Tax MeF Guide for Software Developers, 2012, NY State, 30 pages. |
OpenRules, Preparing a Tax Return Using OpenRules Dialog, Aug. 2011, 25 pages. |
Wikipedia, https://en.wikipedia.org/wiki/Tree_(data_structure), “Tree (data structure)”, May 15, 2005, 1 page. |
Wikipedia, https://en.wikipedia.org/wiki/Data_structure, “Data Structures”, Jan. 12, 2012, 1 page. |
Vanderbilt University, “Free tax prep help available for Vanderbilt employees”, Feb. 6, 2014, Vanderbilt University, p. 1-3 [NPL-1]. |
http://www.wisegeek.com/what-is-declarative-programming.htm, printed Mar. 11, 2014, 2 pages. |
https://developers.facebook.com/docs/marketing-api/overview, printed Feb. 6, 2015, 5 pages. |
https://developers.facebook.com/docs/marketing-apis, printed Feb. 6, 2015, 3 pages. |
https://developers.facebook.com/docs/marketing-apis/using-the-api, printed Feb. 6, 2015, 2 pages. |
H.R. Gregg; Decision Tables for Documentation and System Analysis; Oct. 3, 1967; Union Carbide Corporation, Nuclear Division, Computing Technology Center: (Year: 1967), 25 pages. |
Solomon L. Pollack; Analysis of the Decision Rules in Decision Tables, May 1963; The Rand Corporation, 78 Pages. |
Number | Date | Country | |
---|---|---|---|
Parent | 14462345 | Aug 2014 | US |
Child | 16154434 | US |