Outcome creation based upon synthesis of history

Information

  • Patent Grant
  • Patent Number
    11,687,807
  • Date Filed
    Wednesday, June 26, 2019
  • Date Issued
    Tuesday, June 27, 2023
  • Field of Search
    • US
      • 705 004000
      • 705 035-045
    • CPC
      • G06Q40/02
      • G06K9/6257
      • G06N20/00
      • G06N5/04
      • G06N5/042
  • International Classifications
    • G06Q40/00
    • G06N5/04
    • G06Q40/02
    • G06N20/00
    • G06F18/214
  • Term Extension
    945 days
Abstract
A method of exercising effective influence over future occurrences using knowledge synthesis is described. Techniques include influencing methods that yield actions once a proposed outcome has been assumed. This is different from methods, typically referred to as “predictive” or “prescriptive,” that use analytics to model future results based upon existing data and predict the most likely outcome. One or more methods of analysis of historical data, in a hierarchical manner, determine the events that led to an observed outcome. The outcome-based algorithms use, as input, a future event or state and generate attributes that are necessary precursors. By creating these attributes, the future can be affected. Where necessary, synthetic contributors of such attributes are also created and made to act in ways consistent with generating the assumed outcome. These contributors might be called upon, respectively, to post favorable opinions, to report balmy weather, or to describe sales to a certain population demographic.
Description
BACKGROUND
Prior Application

This application is a priority application.


Technical Field

The system, apparatuses and methods described herein generally relate to machine learning techniques, and, in particular, to creating desired outcomes using predictive analytics and the synthesis of events.


Description of the Related Art

Machine learning and artificial intelligence algorithms have appeared in the computer science literature for the past half century, with slow progress in the predictive analytics realm. We can now take a large data set of various features and process that learning data set through one of a number of learning algorithms to create a rule set based on the data. This rule set can reliably predict what will occur for a given event. For instance, in a fraud prediction application, given an event (a set of attributes), the algorithm can determine whether the transaction is likely to be fraudulent.


Machine learning is a method of analyzing information using algorithms and statistical models to find trends and patterns. In a machine learning solution, statistical models are created, or trained, using historical data. During this process, a sample set of data is loaded into the machine learning solution. The solution then finds relationships in the training data. As a result, an algorithm is developed that can be used to make predictions about the future. Next, the algorithm goes through a tuning process. The tuning process determines how an algorithm behaves in order to deliver the best possible analysis. Typically, several versions, or iterations, of a model are created in order to identify the model that delivers the most accurate outcomes.


Generally, models are used to either make predictions about the future based on past data, or discover patterns in existing data. When making predictions about the future, models are used to analyze a specific property or characteristic. In machine learning, these properties or characteristics are known as features. A feature is similar to a column in a spreadsheet. When discovering patterns, a model could be used to identify data that is outside of a norm. For example, in a data set containing payments sent from a bank, a model could be used to identify unusual payment activity that may indicate fraud.


Once a model is trained and tuned, it is typically published or deployed to a production or QA environment. In this environment, data is often sent from another application in real-time to the machine learning solution. The machine learning solution then analyzes the new data, compares it to the statistical model, and makes predictions and observations. This information is then sent back to the originating application. The application can use the information to perform a variety of functions, such as alerting a user to perform an action, displaying data that falls outside of a norm, or prompting a user to verify that data was properly characterized. The model learns from each intervention and becomes more efficient and precise as it recognizes patterns and discovers anomalies.


An effective machine learning engine can automate the development of machine learning models, greatly reduce the amount of time spent reviewing false positives, call attention to the most important items, and maximize performance based on real-world feedback.


However, machine learning techniques look to the past to predict the future. They are passive algorithms, incapable of creating an action. What if, given a learning data set, one wanted to create a certain outcome? Present teachings on machine learning fail to disclose how to use a machine learning data set to create a desired outcome.


BRIEF SUMMARY OF THE INVENTION

A method for creating a desired outcome is described herein. The method is made up of the steps of (1) inputting the desired outcome on a computer; (2) sending the desired outcome to a machine learning server over a network; (3) parsing rules in a machine learning model to determine a set of necessary-past attributes for creating the desired outcome, on the machine learning server; (4) filtering the set of necessary-past attributes through a list of synthetic features to create synthetic contributors; (5) determining the synthetic contributors required to create the desired outcome; and (6) outputting the synthetic contributors.
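The six steps above can be sketched in code. This is a minimal illustration only, assuming the machine learning model's rules can be represented as (antecedent attributes, predicted outcome) pairs; the function name, rule representation, and example strings are hypothetical, not taken from the patent.

```python
# Hedged sketch of the claimed method. The rule representation and all
# names here are illustrative assumptions, not the patented implementation.

def create_desired_outcome(desired_outcome, model_rules, synthetic_features):
    # Steps 1-2: the desired outcome arrives at the machine learning server.
    # Step 3: parse the model's rules to collect the necessary-past
    # attributes, i.e. the antecedents of rules predicting the outcome.
    necessary_past = set()
    for antecedents, consequent in model_rules:
        if consequent == desired_outcome:
            necessary_past.update(antecedents)

    # Step 4: filter the necessary-past through the list of synthetic
    # (controllable) features to identify the synthetic contributors.
    # Steps 5-6: return the contributors as the output.
    return sorted(necessary_past & set(synthetic_features))

# Example mirroring the bank scenario in the description: three
# antecedents predict a short-term loan request; two are controllable.
rules = [(("checking account", "debit card", "balance < $20,000"),
          "short-term loan request")]
contributors = create_desired_outcome("short-term loan request", rules,
                                      ["checking account", "debit card"])
print(contributors)  # ['checking account', 'debit card']
```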


In some embodiments, the method also includes the step of creating the machine learning model by operating a training module on a machine learning database. The method could also include the step of creating the desired outcome by automatically taking action to implement the synthetic contributors. In some cases, the desired outcome relates to banking, and the synthetic contributors could include information about bank accounts. In some embodiments the synthetic contributors are output to the computer, and in others the synthetic contributors are output to software on the machine learning server. The parsing of the rules could include reverse engineering of the machine learning model. The list of synthetic features could be machine generated. The method could also include the step of locating the desired outcome in a set of machine learning data and creating a dataset for the machine learning model with data proximate to the desired outcome.


A device for creating a desired outcome is also described in this document. The device is made up of a machine learning database electrically connected to a special purpose machine learning server, where the special purpose machine learning server has a list of synthetic features. The special purpose machine learning server accepts an input of the desired outcome, and sends the desired outcome to an outcome creation engine. The outcome creation engine parses the rules of a machine learning model to derive a set of necessary-past attributes. The set of necessary-past attributes are filtered through the list of synthetic features to determine synthetic contributors required to create the desired outcome.


The machine learning model could be created by operating a training module on the machine learning database. In some embodiments, the desired outcome is created by automatically taking action to implement the synthetic contributors. In some cases, the desired outcome relates to banking, and the synthetic contributors could include information about bank accounts. In some embodiments the synthetic contributors are output to the computer, and in others the synthetic contributors are output to software on the machine learning server. In some cases, the rules of the machine learning model are reverse engineered to derive the set of necessary-past attributes.





BRIEF DESCRIPTION OF THE DRAWINGS


FIG. 1 is a block diagram of the component for outputting the attributes required for creating the desired outcome.



FIG. 2 is a block diagram showing the loop by which the environment is impacted by the creation of the desired outcome.



FIG. 3 is a diagram of one possible hardware environment for operating the outcome creation engine.



FIG. 4 is a flow chart of the outcome creation process.



FIG. 5 is a chart of a banking example of finding a necessary past.





DETAILED DESCRIPTION

The inventions herein relate to exercising effective influence over future occurrences using knowledge synthesis. Techniques have been invented and combined with certain others to achieve innovative influencing methods that yield actions 206 once a proposed outcome 106 has been assumed. This is different from methods, typically referred to as “predictive” or “prescriptive,” that use analytics to model future results 103 based upon existing data 102 and predict the most likely outcome 105 based upon natural or planned actions: “if-then” results. The inventions use one or more established methods of analysis of historical data 102 to determine, in a hierarchical manner, the events which led to an observed outcome. The outcome-based algorithms 107 then use, as input, a cast future event or state 106 and generate a set of attributes 108, 202 that are necessary precursors. By creating these attributes 202, or the appearance of their creation, the future can be affected. Where necessary, synthetic contributors 205 of such attributes are also created and made to act in ways consistent with generating the assumed outcome 106. Examples of synthetic contributors 204 are composite personalities, fictitious physical sensors and undocumented sales channels. These contributors might be called upon, respectively, to post favorable opinions, to report balmy weather, or to describe sales to a certain population demographic as being “off the charts.” These synthesized data 205 make up the history, or “necessary-past,” resulting in the outcome. The outcome occurs in the environment 201.


An outcome is the final state of sentiment, ranking and/or binary result, from all of those possible, in the future. Influence is information, action and/or expression of sentiment that determines an outcome. Typically, historical data 102 is used to predict an outcome based upon measurable attributes leading to a foreseeable state. Conversely, if an outcome is assumed or created, then the specific attributes necessary, referred to hereinafter as a necessary-past, can be generated synthetically and applied to affect the result.


Sheep are often influenced to an outcome by very few attributes. One well trained dog can herd several hundred sheep in a desired manner using three attributes: barking, nipping and position/motion. The herd could easily overwhelm a single dog with only barking, nipping and running at his disposal. The dog learns from previous data which sheep to move first and the attribute most likely to influence the motion. Having moved very few of the herd, the dog then relies upon their influence upon each other to get the entire group moving in the desired direction, leaving him to deal only with outliers. The dog has created the future as opposed to forecasting it.


Supervised classification of data and patterns is used to identify attributes that influence historical outcomes, in context. For example, if a company has released a new candy bar and determines that the outcome is that consumers like it, attributes will be determined from analysis of data from a different, but similar, product release where the result was that the product was liked. This modeling, used to determine the influencing attributes, is done using best-known methods in data aggregation, mining and analysis.


A synthetic influencer, according to the invention, can effectively cause a future occurrence by offering specific attributes necessary to exist prior to the outcome. FIG. 2 illustrates this process: a candy maker prefers that social media comments reflect that a new bar is well liked, and this is identified as the future occurrence, “bar is well liked.” This well-liked outcome is the desired outcome 106. Data 102 is gathered and used in FIG. 2 to show that several attributes are necessary to achieve this outcome 106; using reaction to prior and competing products, a map of influencers is identified along with their characteristics, a relatively small number of whom were likely required to move a vast number of followers to an equivalent conclusion and expression of sentiment. The algorithm then assigns synthetic members from its database to express attribute values that are designed to move the required volume of sentiment in the direction consistent with the assumed outcome.


Examples of practical applications for creation of the future by generating the necessary-past and contributing synthesized attributes include:

  • Successful product launch;
  • Positive reputation;
  • Increased awareness;
  • Candidate election;
  • Stock sentiment;
  • Tourism increase;
  • Web site traffic;
  • Redirection of resources.


The core algorithms of the future-creating inventions build a synthetic history of attributes necessary for the achievement of the future state. Further, the algorithm creates and assigns influencers, based upon learned characteristics, to implement the synthetic history 206. In other words, for the future to have a specified, as opposed to an observed or predicted, state, certain things must have occurred prior to the outcome; the algorithm determines those things necessary to create the future and directs the deployment thereof.


Creating the necessary-past 202 is done using available analysis algorithms and techniques 103, 107 which are scored for the type of data available. The attributes of the necessary-past are determined and scored or weighted as required by context and then shaped; such results are gained through the analysis of historical data 102 from physical sensors, sentiment data from social media, collected sales/revenue figures, voter preference, polls etc.


From the analysis above, algorithms are used to discover specific influencers along with the weighted effect of such influence 205. Characteristics such as demographics, persuasions, climate, top sellers, etc. are described. Synthetic influencers are then selected or constructed to output data consistent with the future as cast. The result can be described as a “then-if” solution.


Finally, the invention directs the deployment of the synthetic influencers described above, ordered in time and adjusted in magnitude, as determined by internal algorithms 206. The outcome is tested and influence or influencers may or may not be modified or substituted.


Current prescriptive analytic techniques present “if-then” results. Such results rely upon the observed past to test the result of one or multiple occurrences and determine a likely outcome.


There are four fundamental and multiple secondary components to the algorithm:

  • The first basic component is a means and method to Extract, Transform and Load (ETL) historical data 102 from disparate sources, in terms of format, location, etc. 201;
  • The second component uses predictive analytic algorithms 101 to generate models 103 from the data 102 and to identify attributes as part of a necessary-past 202 of the outcome;
  • The third component allows the future outcome to be entered in to the algorithm and generates the values, in their native format, for the attributes of a necessary-past 205;
  • The fourth fundamental component of the algorithm identifies the synthetic offerers from its internal database that will contribute the attributes.


A major secondary component of the inventions is the database of synthetic contributors 204 whose characteristics result in very few being required to create and demonstrate a necessary past. Other secondary components of the invention include a means to build the synthetic contributors using knowledge derived from examination of assumed-real contributors identified as being necessary influencers in the previously described historical models.


The inputs to the algorithm are, therefore, the future as cast and historical data 102 that yielded a contextually similar result; the output is a necessary-past, its attributes, and the contributors thereof.


Looking to FIG. 1, a block diagram is shown for a traditional machine learning predictive analytics application. To this application, the outcome creation features are added. In a traditional predictive analytics system, a training module 101 operates on a training dataset 102 to create a machine learning model 103. The predictive analytics application 105 calls the machine learning engine 104 with a specific set of event data. The machine learning engine 104 processes the event data through the machine learning model 103 to predict what result will occur with the specific event data.


The outcome creation portion of the system starts with the input of the desired outcome 106. Say a banker wants to increase the number of short-term business loans. The system is asked to increase business loans as the outcome. This outcome is sent to the outcome creation engine 107. The outcome creation engine 107 reverse engineers the machine learning model 103 to identify the feature drivers of the model 103. Say the model 103 has a rule that if a bank customer has a checking account with the bank, uses the debit card, and has a balance of less than $20,000 in the checking account, then the customer is likely to ask for a short-term loan. The outcome creation engine 107 parses the rules of the machine learning model 103, and returns checking account, debit card, and balance less than $20,000.


The bank cannot control the amount of money in the account, so this is an uncontrolled feature 203. But the bank can influence the presence of a checking account and the use of a debit card, perhaps by offering discounts or increased advertising. So the outcome creation engine 107 outputs 108 (1) the presence of a checking account and (2) the use of a debit card. In some embodiments, the balance less than $20,000 is also output 108. The output 108 is the transfer of data to other software in some embodiments; in other embodiments the output 108 may be displayed on a screen, perhaps on the computer 301. In other embodiments, the output 108 causes actions to be taken 109. In some embodiments, the server 303 matches the required attributes 108 to a table of actions to take to cause the required attributes. For instance, this could include lowering the price when increased sales are desired, or buying products to cause a price to increase. All of these actions 109 are automated within the server.
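The split between uncontrolled features 203 and controllable features, together with the server's table of actions 109, can be sketched as follows. The attribute strings and action names are hypothetical examples drawn from the bank scenario above, not a prescribed schema.

```python
# Illustrative sketch: split the necessary-past attributes into
# controllable and uncontrolled features, then look up an automated
# action for each controllable one. All entries are assumptions.

UNCONTROLLED_FEATURES = {"balance < $20,000"}  # the bank cannot set a balance

ACTION_TABLE = {
    "checking account": "advertise checking accounts",
    "debit card": "offer a debit card discount",
}

def plan_actions(necessary_past):
    """Return an action for each controllable necessary-past attribute."""
    controllable = [a for a in necessary_past
                    if a not in UNCONTROLLED_FEATURES]
    return {attr: ACTION_TABLE[attr] for attr in controllable}

actions = plan_actions(["checking account", "debit card",
                        "balance < $20,000"])
print(actions)
# {'checking account': 'advertise checking accounts',
#  'debit card': 'offer a debit card discount'}
```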


Once the actions 109 are taken, the desired outcome 110 is effectuated.



FIG. 2 shows a broader view of embodiments of the present inventions. The environment 201 is the context in which the machine learning operates and which the algorithm seeks to impact with a desired outcome 106. In our example above, the environment 201 is the banking market. From the market environment 201, historical data 102 is collected. This historical data 102 could include features such as customer name, address, the types of accounts and loans that the customer has, balances, etc. This historical data 102 is run through the training module 101 to build the machine learning model 103. The machine learning model 103 and the desired outcome 106 are fed into the outcome creation engine 107 to determine what is needed to generate the set of necessary-past attributes 202. Returning to our example, the machine learning model 103 may determine that it uses the presence of a checking account, the use of a debit card, and a balance of less than $20,000 to predict whether a customer will request a short-term loan. The outcome creation engine 107 determines that checking account, debit card, and balance less than $20,000 are needed to generate the set of necessary-past attributes 202. The set of necessary-past attributes 202 looks to two feature lists: one list of synthetic features 204 and a second list of uncontrolled features 203. The set of necessary-past attributes 202 is filtered through the list of synthetic features 204. In some embodiments, these feature lists 203, 204 are maintained by the machine using machine learning techniques. In other embodiments, these lists 203, 204 are entered by the banker or those setting up the system.


The determine-synthetic-contributors function 205 then takes the features that are controllable by the bank, in this case the checking account and the debit card, and determines the required attributes. In this case, the algorithm looks for the presence of these features. In other situations, it could be certain values of a feature. In some embodiments, the controllable features are tested to see if they have a determinative effect and are not overridden by the uncontrolled features.


The list of desired attributes for the controllable features is then either automatically sent to cause an action 206 or sent to a human for implementation. In our example, the debit card could be automatically sent out to existing customers with checking accounts and balances less than $20,000 to set up the attributes for the desired outcome. However, the opening of the checking account may require the bank to design and implement an advertising campaign to bring in more checking accounts. Each of these actions 206 will impact the environment 201.


Because of the complexities of machine learning algorithms, special purpose computing may be needed to build and execute the machine learning model described herein. FIG. 3 shows one such embodiment. The user enters the desired outcome 106 and perhaps views the output 108 described here on a personal computing device such as a personal computer, laptop, tablet, smart phone, monitor, or similar device 301. The personal computing device 301 communicates through a network 302 such as the Internet, a local area network, or perhaps through a direct interface to the server 303. The special purpose machine learning server 303 is a high performance, multi-core computing device with significant storage facilities 304 in order to store the machine learning training data 102 for the model 103. Since this machine learning database 102 is continuously updated in some embodiments, this data must be kept online and accessible so that it can be updated. In addition, the real-time editing of the model 103 as the user provides feedback to the model 103 requires significant processing power to rebuild the model as feedback is received. The server 303 is a high performance computing machine electrically connected to the network 302 and to the storage facilities 304.


Looking to FIG. 4, a flowchart is shown for creating a desired outcome. The process begins by identifying the desired outcome 401. Knowing what is desired, the process next looks through the data for an existing occurrence of the outcome in history 402. In this step 402, the exact occurrence is sought in the data.


In the example in FIG. 5, the desired outcome is to increase the number of Real Time Payments made by a customer. In this example, we assume that the typical customer does about 90% of their transactions as Automated Clearing House (ACH) payments, typically for a nominal or no cost. About 10% of the payments are done with wire transfers, for a substantial cost per transaction (maybe $25 per wire). A new payment method called Real Time Payment (RTP) is introduced at a moderate price, perhaps $2 per transaction. The desired outcome 401 is to transition customers from ACH to RTP payments. In FIG. 5, a customer has been identified who converted a significant amount of their business from ACH to RTP. In one embodiment, this customer was found by the server 303 by searching for customers who have more RTP payments than ACH payments. In some embodiments, rather than a single customer an aggregate of a plurality of customers could be charted. In some embodiments this aggregation is simply time based, and in other embodiments, the time is shifted to align the desired outcomes.


In the chart on FIG. 5, the change from no RTP payments to many RTP payments started in November or December 2019. So the server 303 identifies the occurrence of the desired outcome 402 as December 2019, by comparing the RTP data points to see where a sharp increase occurs.
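One simple way to locate the occurrence 402 is to scan monthly RTP counts for the largest month-over-month jump. The sketch below uses invented monthly figures; only the sharp-increase test reflects the description, and the function name is hypothetical.

```python
# Sketch of step 402: find the month where RTP payments sharply increase.
# The monthly counts are invented for illustration.

def find_outcome_month(months, rtp_counts):
    """Return the month with the largest month-over-month increase."""
    deltas = [rtp_counts[i] - rtp_counts[i - 1]
              for i in range(1, len(rtp_counts))]
    return months[deltas.index(max(deltas)) + 1]

months = ["2019-09", "2019-10", "2019-11", "2019-12"]
rtp_counts = [0, 0, 2, 40]  # a few RTPs in November, a surge in December
print(find_outcome_month(months, rtp_counts))  # 2019-12
```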


The next step in FIG. 4 is to create a data set 403 for a period of time leading up to the desired change. This time period could be a parameter set by a user, or it could be determined by repeatedly creating models 404 and testing the results to see if an interesting result is produced. After the data set 403 is determined, the necessary-past is modeled 404 using traditional machine learning techniques used for predictive analytics.


In our example in FIG. 5, a one-year period is selected, and the data set 403 is marked as from January 2019 to December 2019. A machine learning model is run on the 2019 data, looking at the number of ACH payments, the number of wire payments, and the number of RTP payments. The machine learning model 404 notices that in the six months before the desired outcome in December 2019, the number of ACH payments decreased and the number of wire payments increased. Essentially, the model notices that before the RTP payments were used by the customer, the customer switched a greater percentage of payments to wire transfers for 5-6 months, and then the customer moved to RTP payments.
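The precursor pattern the model notices, ACH counts falling while wire counts rise in the months before the outcome, can be checked with a simple sign-of-slope test. This sketch stands in for the trained model 404 under stated assumptions; the monthly figures are invented for illustration.

```python
# Sketch of the precursor check behind step 404: in the months before
# the outcome, ACH counts trend down and wire counts trend up. A
# least-squares slope stands in for the trained model.

def slope(series):
    """Least-squares slope of a series against its time index."""
    n = len(series)
    x_mean = (n - 1) / 2
    y_mean = sum(series) / n
    num = sum((i - x_mean) * (y - y_mean) for i, y in enumerate(series))
    den = sum((i - x_mean) ** 2 for i in range(n))
    return num / den

ach_counts = [100, 97, 92, 85, 80, 74]   # six months before the outcome
wire_counts = [10, 12, 15, 19, 22, 26]

precursor_found = slope(ach_counts) < 0 and slope(wire_counts) > 0
print(precursor_found)  # True
```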


In FIG. 4, the process then identifies the actions to take to match the outcome 405, effectuating the desired outcome 406.


In the example in FIG. 5, the server 303 identifies that an increase in wire transfers will cause the customer to consider RTP as a method of payment. The server 303 then searches a list of possible actions for an action that will increase wire transfers. For example, the server 303 may determine that a significant sale on wire transfers may cause a change in the mix of wires and ACH payments. By significantly discounting wire transfers, perhaps to several dollars for a temporary sale, the customer switches over a number of payments from ACH to wire. Then when the wire transfer sale ends, the bank recommends that RTP payments be used instead of wires. This will match the chart in FIG. 5 for another customer.


The foregoing devices and operations, including their implementation, will be familiar to, and understood by, those having ordinary skill in the art.


The above description of the embodiments, alternative embodiments, and specific examples, are given by way of illustration and should not be viewed as limiting. Further, many changes and modifications within the scope of the present embodiments may be made without departing from the spirit thereof, and the present invention includes such changes and modifications.

Claims
  • 1. A method for identifying synthetic contributors required to generate a desired outcome, the method comprising: creating a machine learning model by operating a training module on a machine learning database on a machine learning server; receiving the desired outcome at the machine learning server over a network from a computer; reverse engineering the machine learning model by parsing rules in the machine learning model using the desired outcome to determine a set of necessary-past attributes for generating the desired outcome, on the machine learning server, where the necessary-past attributes are data and pattern attributes that need to be present when the machine learning model is run to generate the desired outcome; filtering the set of necessary-past attributes through a list of synthetic features to identify the synthetic contributors required to generate the desired outcome, wherein the synthetic features are features input to the machine learning model that can be controlled and the synthetic contributors are presence or value attributes of the machine learning model that can be made to act in a way to generate the desired outcome; and outputting the synthetic contributors.
  • 2. The method of claim 1 wherein the desired outcome relates to web site traffic.
  • 3. The method of claim 1 further comprising: generating the desired outcome by automatically taking action to implement the synthetic contributors.
  • 4. The method of claim 1 wherein the desired outcome relates to banking.
  • 5. The method of claim 4 wherein the synthetic contributors include information about bank accounts.
  • 6. The method of claim 1 wherein the synthetic contributors are output to the computer.
  • 7. The method of claim 1 wherein the synthetic contributors are output to software on the machine learning server.
  • 8. The method of claim 1 wherein the list of synthetic features is machine generated.
  • 9. The method of claim 1 further comprising locating the desired outcome in a set of machine learning data and creating a dataset for the machine learning model with data proximate to the desired outcome.
  • 10. A device for identifying synthetic contributors required to generate a desired outcome, the device comprising: a special purpose machine learning server; a machine learning database electrically connected to the special purpose machine learning server; and a list of synthetic features stored in the special purpose machine learning server, where the synthetic features are features input to a machine learning model that can be controlled; wherein the special purpose machine learning server creates the machine learning model by operating a training module on the machine learning database on the machine learning server, accepts an input of the desired outcome, and uses the desired outcome in conjunction with an outcome creation engine to reverse engineer the machine learning model by parsing rules of the machine learning model using the desired outcome to derive a set of necessary-past attributes, the set of necessary-past attributes filtered through the list of synthetic features to identify the synthetic contributors required to generate the desired outcome, where the necessary-past attributes are data and pattern attributes that need to be present when the machine learning model is run to generate the desired outcome and the synthetic contributors are presence or value attributes of the machine learning model that can be made to act in a way to generate the desired outcome.
  • 11. The device of claim 10 wherein the desired outcome relates to web site traffic.
  • 12. The device of claim 10 wherein the desired outcome is generated by automatically taking action to implement the synthetic contributors.
  • 13. The device of claim 10 wherein the desired outcome relates to banking.
  • 14. The device of claim 13 wherein the synthetic contributors include information about bank accounts.
  • 15. The device of claim 10 wherein the synthetic contributors are shared with other software on the special purpose machine learning server.
  • 16. The device of claim 10 wherein the synthetic contributors are output on a display.
  • 17. The device of claim 10 wherein the list of synthetic features is machine generated.
  • 18. A device for generating a desired outcome, the device comprising: a special purpose machine learning server; and a machine learning database electrically connected to the special purpose machine learning server; wherein the special purpose machine learning server comprises: a means for creating a machine learning model by operating a training module on the machine learning database on the machine learning server; and a means for generating the desired outcome using the machine learning model.
US Referenced Citations (149)
Number Name Date Kind
5262942 Earle Nov 1993 A
5729594 Klingman Mar 1998 A
5809483 Broka et al. Sep 1998 A
5815657 Williams et al. Sep 1998 A
5875108 Hoffberg et al. Feb 1999 A
5890140 Clark et al. Mar 1999 A
5913202 Motoyama Jun 1999 A
5970482 Pham et al. Oct 1999 A
6023684 Pearson Feb 2000 A
6105012 Chang et al. Aug 2000 A
6141699 Luzzi et al. Oct 2000 A
6151588 Tozzoli et al. Nov 2000 A
6400996 Hoffberg et al. Jun 2002 B1
6505175 Silverman et al. Jan 2003 B1
6523016 Michalski Feb 2003 B1
6675164 Kamath et al. Jan 2004 B2
6687693 Cereghini et al. Feb 2004 B2
6708163 Kargupta et al. Mar 2004 B1
6856970 Campbell et al. Feb 2005 B1
7092941 Campos Aug 2006 B1
7308436 Bala et al. Dec 2007 B2
D593579 Thomas Jun 2009 S
7617283 Aaron et al. Nov 2009 B2
7716590 Nathan May 2010 B1
7720763 Campbell et al. May 2010 B2
7725419 Lee et al. May 2010 B2
7805370 Campbell et al. Sep 2010 B2
8229875 Roychowdhury Jul 2012 B2
8229876 Roychowdhury Jul 2012 B2
8429103 Aradhye et al. Apr 2013 B1
8776213 McLaughlin et al. Jul 2014 B2
8990688 Lee et al. Mar 2015 B2
9003312 Ewe et al. Apr 2015 B1
9405427 Curtis et al. Aug 2016 B2
D766292 Rubio Sep 2016 S
D768666 Anzures et al. Oct 2016 S
9489627 Bala Nov 2016 B2
9537848 McLaughlin et al. Jan 2017 B2
D780188 Xiao et al. Feb 2017 S
D783642 Capela et al. Apr 2017 S
D784379 Pigg et al. Apr 2017 S
D784381 McConnell et al. Apr 2017 S
9667609 McLaughlin et al. May 2017 B2
D785657 McConnell et al. May 2017 S
D790580 Hatzikostas Jun 2017 S
D791161 Hatzikostas Jul 2017 S
D803238 Anzures et al. Nov 2017 S
9946995 Dwyer et al. Apr 2018 B2
D814490 Bell Apr 2018 S
10262235 Chen et al. Apr 2019 B1
D853424 Maier et al. Jul 2019 S
10423948 Wilson et al. Sep 2019 B1
D871444 Christiana et al. Dec 2019 S
D872114 Schuster Jan 2020 S
D873277 Anzures et al. Jan 2020 S
D878385 Medrano et al. Mar 2020 S
D884003 Son et al. May 2020 S
D900847 Saltik Nov 2020 S
D901528 Maier et al. Nov 2020 S
D901530 Maier et al. Nov 2020 S
D902957 Cook et al. Nov 2020 S
10924514 Altman et al. Feb 2021 B1
11003999 Gil et al. May 2021 B1
20020016769 Barbara et al. Feb 2002 A1
20020099638 Coffman et al. Jul 2002 A1
20020118223 Steichen et al. Aug 2002 A1
20020135614 Bennett Sep 2002 A1
20020138431 Antonin et al. Sep 2002 A1
20020188619 Low Dec 2002 A1
20020194159 Kamath et al. Dec 2002 A1
20030041042 Cohen et al. Feb 2003 A1
20030093366 Halper et al. May 2003 A1
20030184590 Will Oct 2003 A1
20030212629 King Nov 2003 A1
20030220844 Marnellos et al. Nov 2003 A1
20030233305 Solomon Dec 2003 A1
20040034558 Eskandari Feb 2004 A1
20040034666 Chen Feb 2004 A1
20050010575 Pennington Jan 2005 A1
20050154692 Jacobsen et al. Jan 2005 A1
20050027645 Lui et al. Feb 2005 A1
20050171811 Campbell et al. Aug 2005 A1
20050177495 Crosson Smith Aug 2005 A1
20050177504 Crosson Smith Aug 2005 A1
20050177521 Crosson Smith Aug 2005 A1
20060015822 Baig et al. Jan 2006 A1
20060080245 Bahl et al. Apr 2006 A1
20060101048 Mazzagatti et al. May 2006 A1
20060190310 Gudla et al. Aug 2006 A1
20060200767 Glaeske et al. Sep 2006 A1
20060265662 Gertzen Nov 2006 A1
20070156673 Maga et al. Jul 2007 A1
20070266176 Wu Nov 2007 A1
20080104007 Bala Mar 2008 A1
20080091600 Egnatios et al. Apr 2008 A1
20080228751 Kenedy et al. Sep 2008 A1
20090150814 Eyer et al. Jun 2009 A1
20090192809 Chakraborty et al. Jul 2009 A1
20090240647 Green et al. Sep 2009 A1
20090307176 Jeong et al. Dec 2009 A1
20100066540 Theobald et al. Mar 2010 A1
20110302485 D'Angelo et al. Dec 2011 A1
20120041683 Vaske et al. Feb 2012 A1
20120054095 Lesandro et al. Mar 2012 A1
20120072925 Jenkins et al. Mar 2012 A1
20120197795 Campbell et al. Aug 2012 A1
20120290379 Hoke et al. Nov 2012 A1
20120290382 Martin et al. Nov 2012 A1
20120290474 Hoke Nov 2012 A1
20120290479 Hoke et al. Nov 2012 A1
20130054306 Bhalla et al. Feb 2013 A1
20130071816 Singh et al. Mar 2013 A1
20130110750 Newnham et al. May 2013 A1
20130211937 Elbirt et al. Aug 2013 A1
20130219277 Wang et al. Aug 2013 A1
20130231974 Harris et al. Sep 2013 A1
20130339187 Carter Dec 2013 A1
20140121830 Gromley et al. May 2014 A1
20140129457 Peeler May 2014 A1
20140143186 Bala May 2014 A1
20140241609 Vigue et al. Aug 2014 A1
20140244491 Eberle et al. Aug 2014 A1
20140258104 Harnisch Sep 2014 A1
20140279484 Dwyer et al. Sep 2014 A1
20140317502 Brown et al. Oct 2014 A1
20150363801 Ramberg et al. Dec 2015 A1
20160011755 Douek et al. Jan 2016 A1
20160086185 Adjaoute Mar 2016 A1
20160164757 Pape Jun 2016 A1
20160232546 Ranft et al. Aug 2016 A1
20160292170 Mishra et al. Oct 2016 A1
20170019490 Poon et al. Jan 2017 A1
20180005323 Grassadonia Jan 2018 A1
20180025140 Edelman et al. Jan 2018 A1
20180032908 Nagaraju et al. Feb 2018 A1
20180075527 Nagla et al. Mar 2018 A1
20180218453 Crabtree et al. Aug 2018 A1
20180285839 Yang et al. Oct 2018 A1
20180337770 Bathen et al. Nov 2018 A1
20180349446 Triolo et al. Dec 2018 A1
20190122149 Caldera et al. Apr 2019 A1
20190205977 Way et al. Jul 2019 A1
20190213660 Astrada et al. Jul 2019 A1
20190236598 Padmanabhan Aug 2019 A1
20190258818 Yu et al. Aug 2019 A1
20190340847 Hendrickson et al. Nov 2019 A1
20200119905 Revankar et al. Apr 2020 A1
20200151812 Gil et al. May 2020 A1
20210125272 Sinharoy Apr 2021 A1
Foreign Referenced Citations (6)
Number Date Country
2003288790 Mar 2005 AU
2756619 Sep 2012 CA
3376361 Sep 2018 EP
57-101980 Jun 1982 JP
2019021312 Jan 2019 WO
2020068784 Apr 2020 WO
Non-Patent Literature Citations (22)
Entry
AcceptEasy, “In-Chat Payments”, webpage downloaded from https://www.accepteasy.com/en/in-chat-payments on Mar. 21, 2019.
AcceptEasy > Public, webpage downloaded from https://plus.google.com/photos/photo/104149983798357617844/6478597596568340146 on Mar. 21, 2019.
Bansal, Nikhil, Avrim Blum, and Shuchi Chawla. “Correlation clustering.” Machine Learning 56.1-3 (2004): 89-113.
Bloomberg Terminal, Wikipedia, Feb. 20, 2021, webpage downloaded from https://en.wikipedia.org/wiki/Bloomberg_Terminal on Apr. 27, 2021.
Data mining to reduce churn. Berger, Charles. Target Marketing; Philadelphia 22.8 (Aug. 1999): 26-28.
Eckoh, “ChatGuard, The only PCI DSS compliant Chat Payment solution”, webpage downloaded from https://www.eckoh.com/pci-compliance/agent-assisted-payments/ChatGuard on Mar. 25, 2019.
Finley, Thomas, and Thorsten Joachims. “Supervised clustering with support vector machines.” Proceedings of the 22nd international conference on Machine learning, ACM, 2005.
Hampton, Nikolai, Understanding the blockchain hype: Why much of it is nothing more than snake oil and spin, Computerworld, Sep. 5, 2016.
Khalid, “Machine Learning Algorithms 101”, Feb. 17, 2018 https://towardsml.com/2018/02/17/machine-learning-algorithms-101/ (Year: 2018).
Live chat Screenshots, mylivechat, webpage downloaded May 17, 2021 from https://mylivechat.com/screenshots.aspx.
LiveChat, “Pagato”, webpage downloaded from https://www.livechatinc.com/marketplace/apps/pagato/ on Mar. 21, 2019.
ManyChat, “[UPDATED] Accept Payments Directly in Facebook Messenger Using ManyChat!”, webpage downloaded from https://blog.manychat.com/manychat-payments-using-facebook-messenger/ on Mar. 21, 2019.
McConaghy, Trent, Blockchains for Artificial Intelligence, Medium, Jan. 3, 2017.
Meilă et al., Comparing clusterings-an information based distance, Journal of Multivariate Analysis 98 (2007) 873-895.
Mining for gold. Edelstein, Herb. InformationWeek; Manhasset 627 (Apr. 21, 1997): 53-70.
Pagato, “Accept Payments via Chat”, webpage downloaded from https://pagato.com/ on Mar. 21, 2019.
PCIpal, “SMS & Web Chat”, webpage downloaded from https://www.pcipal.com/us/solutions/sms-web-chat/ on Mar. 21, 2019.
The Clearing House, “U.S. Real-Time Payments Technology Playbook”, Version 1.0, Nov. 2016.
Traffic Analysis of a Web Proxy Caching Hierarchy. Anirban Mahanti et al., University of Saskatchewan, IEEE Network, May/Jun. 2000.
Jhingran, Anant, "Moving toward outcome-based intelligence", IDG Contributor Network, Mar. 19, 2018.
James Wilson, “How to take your UX to a new level with personalization”, Medium blog, Oct. 24, 2018, found at https://medium.com/nyc-design/how-to-take-your-ux-to-a-new-level-with-personalization-1579b3ca414c on Jul. 29, 2019.
“Distributed Mining of Classification Rules”, By Cho and Wuthrich, 2002 http://www.springerlink.com/(21nnasudlakyzciv54i5kxz0)/app/home/contribution.asp?referrer=parent&backto=issue,1,6;journal,2,3,31;linkingpublicationresults,1:105441,1.