Technical Field
This invention relates generally to the field of service modules. More specifically, this invention relates to a data and prediction driven methodology for facilitating customer interaction.
Description of the Related Art
When a customer desires to purchase a good or service, there are a variety of interaction channels for receiving the customer's order, and these channels fall along a spectrum of human guidance. At one end of the spectrum, a customer can order goods or services from a human over the telephone. At the other end are completely automated services, such as a website for receiving customer orders. Between these extremes are mixed human and automated interactions. For example, a customer can place an order on a website while chatting with a human over the Internet, or the customer can email order information to a human who inputs the order into an automated system.
The type of interaction is a function of customer preference, the cost of servicing the customer, and the lifetime value of the customer to the company. For example, providing a human to receive orders is more expensive than offering self-service over a website, but if the customer would not otherwise purchase the good or service, the company still profits.
The cost of interacting with customers could be reduced if there were a way to categorize and thus predict customer behavior. For example, the process would be more efficient if there were a way to predict that a particular customer is difficult and, as a result, assign the customer to an agent with experience at defusing irate customers, which would result in a happier customer and a shorter interaction time.
In one embodiment, the invention comprises a method and/or an apparatus that enhances customer experience, reduces the cost of servicing a customer, predicts and prevents customer attrition by predicting the appropriate interaction channel through analysis of different types of data and filtering of irrelevant data, and predicts and enhances customer growth.
Customer experience is enhanced by empowering the customer to choose an appropriate interaction channel, exploiting the availability of web-based self-service wizards and online applications; reducing the resolution time by preserving the service context as a customer moves from one channel to another; and using auto-alerts, reminders, and auto-escalations through the web-based service portal. The cost of servicing a customer is reduced by deflecting service requests from a more expensive to a less expensive channel, increasing the first-time resolution percentage, and reducing the average handle time. Customer attrition is predicted and prevented by offering the most appropriate interaction channel and ensuring that the service request is handled by the right agent. Customer growth is achieved by targeting the right offer to the right customer at the right time through the right channel.
In one embodiment, the invention comprises a method and/or an apparatus for generating models to predict customer behavior.
Data Sources
The predictive engine 125 requires information to generate accurate predictions. In one embodiment, the information is limited to customer interactions derived from one company's sales. If the information is derived from a large company, e.g. Amazon.com®, the data set is large enough to make accurate predictions. Smaller companies, however, lack sufficient customer data to make accurate predictions.
In one embodiment, a customer interaction data engine 117 receives data categorized as a problem dimension 100, a product dimension 105, a customer dimension 110, and an agent dimension 115. This data comes from a variety of sources such as customer history, call by call transactions, etc.
The problem dimension 100 contains data relating to any customer interaction arising from a problem with a product or service. In one embodiment, the problem dimension 100 comprises a unique identification number associated with a particular problem, a description of the problem, a category, a sub-category, and any impact on customers.
The product dimension 105 contains information about a particular product. In one embodiment, the product dimension 105 comprises a unique identification number associated with each product, a product, a category, a sub-category, information regarding the launch of the product, and notes. In another embodiment, the product identification is a unique identification number associated with each purchase of a product.
The customer dimension 110 contains information about the customer. In one embodiment, the customer dimension 110 comprises a unique identification number associated with the customer; a company name; a location that includes an address, zip code, and country; a phone number; a role contact; an account start date; a first shipment date; a last shipment date; the number of goods or services purchased in the last month; the number of goods or services purchased in the last year; a statistics code; credit limits; and the type of industry associated with the customer. The time from purchase is divided into stages. For example, the first stage is the first three months after purchase of the product, the second stage is three to six months from purchase, etc.
In one embodiment, the customer dimension 110 contains both structured and unstructured data. Unstructured data can be free text or voice data. The free text is from chat, emails, blogs, etc. The voice data is translated into text using a voice-to-text transcriber. The customer interaction data engine 117 structures the data and merges it with the other data stored in the data warehouse 120.
The agent dimension 115 includes information about the people who receive phone calls from customers. The agent dimension 115 also contains both structured and unstructured data. The data is used to match customers with agents. For example, if a model predicts that a particular customer does not require a large amount of time, the call can be matched with a less experienced agent. On the other hand, if a model predicts that the customer requires a large amount of time or has a specialized problem, it is more efficient to match the customer with an experienced agent, especially if the agent has specialized knowledge with regard to a product purchased by the customer, angry customers from the south, etc.
In one embodiment, the agent dimension 115 includes a unique identification number associated with the agent, the agent's name, a list of the agent's skill sets, relevant experience in the company in general, relevant experience in particular areas in the company, interactions per day, customer satisfaction scores, gender, average handle times (AHT), holding time, average speed of answer (ASA), after call work (ACW) time, talk time, call outcome, call satisfaction (CSAT) score, net experience score (NES), and experience in the process. The AHT is the time from the moment a customer calls to the end of the interaction, i.e. talk and hold time. The ASA measures how long a customer waits before the agent answers the call. The ACW is the time it takes the agent to complete tasks after the customer call, e.g. reordering a product for a customer, entering comments about the call into a database, etc.
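The metric definitions above can be captured in a short sketch; the record fields and sample values are illustrative assumptions, not part of the embodiment:

```python
from dataclasses import dataclass

@dataclass
class CallRecord:
    # All durations in seconds; field names are illustrative, not from the embodiment.
    talk_time: int
    hold_time: int
    wrap_up_time: int  # after-call work (ACW)

def average_handle_time(calls):
    """AHT: talk plus hold time, averaged over the agent's calls."""
    return sum(c.talk_time + c.hold_time for c in calls) / len(calls)

def average_acw(calls):
    """Average after-call work time."""
    return sum(c.wrap_up_time for c in calls) / len(calls)
```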
CSAT is measured from survey data received from customers after a transaction is complete. The NES tracks agent cognitive capabilities, agent emotions, and customer emotions during the call by detecting keywords that are characterized as either positive or negative. In one embodiment, the NES algorithm incorporates rules about the proximity of phrases to agents, products, the company, etc. to further characterize the nature of the conversations.
The positive characteristics for cognitive capabilities include building trust, reviewability, factuality, attentiveness, responsiveness, greetings, and rapport building. The negative characteristics for cognitive capabilities include ambiguity, repetitions, and contradictions. Positive emotions include keywords that indicate that the parties are happy, eager, and interested. Negative emotions include keywords that indicate that the parties are angry, disappointed, unhappy, doubtful, and hurt.
In one embodiment, the customer interaction engine 117 calculates a positive tone score and a negative tone score, takes the difference between the two, and divides by the sum of the negative tone score and positive tone score. This calculation is embodied in equation 1:
[Σ(P1+P2+ . . . +Px) − Σ(N1+N2+ . . . +Ny)] / [Σ(P1+P2+ . . . +Px) + Σ(N1+N2+ . . . +Ny)] (eq. 1)
In another embodiment, the customer interaction engine 117 uses a weighted approach that gives more weight to extreme emotions. If the conversation takes place in written form, e.g. instant messaging, the algorithm gives more weight to bolded or italicized words. Furthermore, the algorithm assigns a score to other indicators such as emoticons.
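Equation 1 and its weighted variant can be sketched as follows; the per-keyword score lists are an assumed input format (a weighted variant simply passes larger scores for extreme emotions or bolded text), and the function name is chosen here for illustration:

```python
def net_experience_score(positive, negative):
    """Eq. 1: (sum of positive tone scores minus sum of negative tone scores)
    divided by (sum of positive plus sum of negative tone scores)."""
    p, n = sum(positive), sum(negative)
    if p + n == 0:
        return 0.0  # assumption: a conversation with no scored keywords is neutral
    return (p - n) / (p + n)
```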
Customer Interaction Data Engine
In one embodiment, the data from each of the dimensions is transferred to the customer interaction data engine 117 for data processing. The data received from the different dimensions is frequently received in different formats, e.g. comma-separated values (CSV), tab-delimited text files, etc. The data warehouse 120, however, stores data in columns. Thus, the customer interaction data engine 117 transforms the data into a proper format for storage in the data warehouse 120. This transformation also includes taking unstructured data, such as the text of a chat, and structuring it into the proper format. In one embodiment, the customer interaction data engine 117 receives data from the different dimensions via a file transfer protocol (FTP) from the websites of companies that provide goods and services.
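A minimal sketch of the row-to-column transformation, assuming a CSV feed and an in-memory columnar dictionary as a stand-in for the data warehouse:

```python
import csv
import io

def rows_to_columns(csv_text):
    """Transform row-oriented CSV (one common feed format) into the
    column-oriented layout assumed for the data warehouse."""
    reader = csv.DictReader(io.StringIO(csv_text))
    columns = {name: [] for name in reader.fieldnames}
    for row in reader:
        for name, value in row.items():
            columns[name].append(value)
    return columns
```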
In one embodiment, the data warehouse 120 is frequently updated with data. As a result, the predictions become increasingly accurate. This is particularly important when a model is generated for a customer using that customer's previous interactions with a company because those interactions are a strong predictor of future behavior.
Predictive Engine
The predictive engine 125 compiles the data from the data warehouse 120 and organizes the data into clusters known as contributing variables. Contributing variables are variables that have a statistically significant effect on the data. For example, the shipment date of a product affects the date that the customer reports a problem. Problems tend to arise after certain periods such as immediately after the customer receives the product or a year after use. Thus, shipment date is a contributing variable. Conversely, product identification is not a contributing variable because it cannot be correlated with any other factor.
Contributing variables are identified differently depending on whether the variable is numerical or categorical. Contributing variables for numerical data are identified using regression analysis algorithms, e.g. least squares, linear regression, a linear probability model, nonlinear regression, Bayesian linear regression, nonparametric regression, etc. Categorical predictions use different methods, for example, a neural network or a naïve Bayes algorithm.
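One way to screen a numeric variable as contributing is a simple correlation test, a minimal stand-in for the regression algorithms listed above; the Pearson statistic and the 0.5 cutoff are illustrative assumptions, not part of the embodiment:

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation between a candidate variable and the target."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def is_contributing(xs, ys, threshold=0.5):
    """Flag a numeric variable as contributing when it correlates with the
    target; the cutoff stands in for a proper significance test."""
    return abs(pearson_r(xs, ys)) >= threshold
```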
The contributing variables are used to generate models that predict trends, patterns, and exceptions in data through statistical analysis. In one embodiment, the predictive engine 125 uses a naïve Bayes algorithm to predict behavior. The following example shows how a naïve Bayes algorithm is used by the predictive engine 125 to predict the most common problems associated with a family in Arizona using mobile devices made by Nokia.
The problem to be solved by the predictive engine 125 is: given a set of customer attributes, what are the top queries the customer is likely to have? Attributes and their levels are selected 300 for the model. Table 1 illustrates the different attributes: state, plan, handset, BznsAge, and days left on the plan. BznsAge is an abbreviation of business age, i.e. how long someone has been a customer. The attribute levels further categorize the different attributes.
Data from the problem dimension 100, product dimension 105, and customer dimension 110 are merged 305 with queries from text mining. These text mining queries are examples of unstructured data that was structured by the customer interaction data engine 117. Table 2 shows the merged data for the customer attributes. Table 3 shows the merged data for the queries from text mining, which include all the problems associated with the mobile phones.
In Table 3, a 1 represents a problem and a 0 represents no problem.
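The merge step can be sketched as a join on a shared interaction identifier; the dictionaries standing in for Tables 2 and 3 are hypothetical:

```python
def merge_dimensions(attributes, queries):
    """Join per-interaction attribute rows (as in Table 2) with the 0/1 query
    indicator rows (as in Table 3) on a shared interaction id; a toy stand-in
    for the customer interaction data engine's merge step 305."""
    merged = []
    for key, attrs in attributes.items():
        row = dict(attrs)
        row.update(queries.get(key, {}))  # missing query rows simply add nothing
        merged.append(row)
    return merged
```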
The conditional probability of query Q being asked by a customer who possesses the attributes A1, . . . , An is determined by calculating 310 the probability p(Q) and calculating 315 the conditional probabilities p(Ai/Q) using the naïve Bayes algorithm:
p(Q/A1, . . . ,An)=p(Q)p(A1/Q)p(A2/Q) . . . p(An/Q) Eq. (3)
p(Q) is calculated as the ratio of the number of times query Q appears in the matrix to the total number of times all the queries Q1, . . . , Qn occur. p(Ai/Q) is calculated differently for categorical and continuous data. The probabilities for all queries based on the attributes are calculated, and the top three problems are selected based on the value of the probability.
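The prior and the categorical conditional described above reduce to simple count ratios, as this sketch shows; the count dictionaries are illustrative inputs, not data from the embodiment:

```python
def prior(query, query_counts):
    """p(Q): times query Q appears over all query occurrences in the matrix."""
    return query_counts[query] / sum(query_counts.values())

def conditional(attr_count_with_query, query_count):
    """p(Ai/Q) for categorical data: times attribute Ai co-occurs with query Q
    over the number of times query Q appears."""
    return attr_count_with_query / query_count
```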
For example, if a customer has the attributes (Arizona, Family, Nokia, 230, 120), the probability of the Signal query can be calculated as follows:
p(Signal/Arizona,Family,Nokia,230,120)=p(Signal)p(Arizona/Signal)p(Family/Signal)p(BznsAge=230/Signal)p(DaysLeft=120/Signal)
p (Ai/Q) can be calculated as the ratio of the number of times attribute Ai appeared in all the cases when Query Q appeared to the number of times Query Q appeared.
The probabilities for Signal Query are:
p(Signal) = 7/77: the Signal query appears seven times while there are a total of 77 queries (Tables 2 and 3). p(Arizona/Signal) = 1/7: Arizona appears only once when the Signal query occurs. p(Family/Signal) = 1/7: Family appears only once when the Signal query occurs. p(Nokia/Signal) = 2/7: Nokia appears twice when the Signal query occurs.
The Cancel Query is calculated the same way:
p (Cancel)=10/77. p (Arizona/Cancel)=4/10. p (Family/Cancel)=3/10. p (Nokia/Cancel)=3/10.
These conditional probabilities can be populated in a matrix to be used in a final probability calculation. The cells with an * do not have a calculated probability; in an actual calculation, these cells would have to be populated as well. Table 4 is a matrix populated with the conditional probabilities.
Assuming the data to be normally distributed, the probability density function is:
f(x) = [1/(σ√(2π))]·e^(−(x−μ)²/(2σ²)) Eq. (4)
Probability at a single point in any continuous distribution is zero. Probability for a small range of a continuous function is calculated as follows:
P(x−Δx/2 < X < x+Δx/2) ≈ Δx·f(x) Eq. (5)
Treating this as the probability for a particular value, we can neglect Δx because this term appears in all the probabilities calculated for each Query/Problem. Hence the density function f(x) is used as the probability, which is calculated 325 for a particular numeric value X from its formula. The mean (μ) and standard deviation (σ) for the assumed normal distribution can be calculated 320 as per the following formulas:
μ = (1/n)·Σ(i=1 to n) Xi Eq. (6)
σ = √[(1/(n−1))·Σ(i=1 to n) (Xi−μ)²] Eq. (7)
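Equations 4, 6, and 7 can be sketched directly as follows; this is a minimal illustration of the formulas, not the engine's implementation:

```python
import math

def sample_mean(xs):
    """Eq. 6: arithmetic mean of the attribute values."""
    return sum(xs) / len(xs)

def sample_std(xs):
    """Eq. 7: sample standard deviation with an n−1 denominator."""
    mu = sample_mean(xs)
    return math.sqrt(sum((x - mu) ** 2 for x in xs) / (len(xs) - 1))

def normal_density(x, mu, sigma):
    """Eq. 4: the normal density, used in place of a point probability
    for continuous attributes."""
    return (1 / (sigma * math.sqrt(2 * math.pi))) * math.exp(
        -((x - mu) ** 2) / (2 * sigma ** 2))
```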
For signal problem, the mean and standard deviation for BznsAge are calculated using equations 6 and 7:
μBznsAge=(375+234+296+311+186+276+309)/7=283.85
σBznsAge = √{(1/6)·[(375−283.85)² + (234−283.85)² + (296−283.85)² + (311−283.85)² + (186−283.85)² + (276−283.85)² + (309−283.85)²]} = 60.47
p(BznsAge=230/Signal) = [1/(60.47·√(2π))]·e^(−(230−283.85)²/(2·60.47²)) = 0.029169
Similarly for Days_Left: μDaysLeft=213, σDaysLeft=72.27, p (DaysLeft=120/Signal)=0.04289
For Cancel, the mean and standard deviation for BznsAge and Days_Left are calculated in similar fashion: μBznsAge=248.5, σBznsAge=81.2, p(BznsAge=230/Cancel)=0.018136, μDaysLeft=230.4, σDaysLeft=86.51, p(DaysLeft=120/Cancel)=0.03867.
These probabilities are calculated in real time, with the exact value of the attribute possessed by the customer. Table 5 is a matrix populated with the mean and standard deviations, which are further used for the probability calculation in real time.
The final probability for a query/problem for the given set of attributes is calculated 330:
p(Signal/Arizona,Family,Nokia,230,120)=7/77*1/7*1/7*2/7*0.029169*0.04289=0.0000006632
p(Cancel/Arizona,Family,Nokia,230,120)=10/77*4/10*3/10*3/10*0.018136*0.03867=0.0000032789
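The final calculation 330 multiplies the prior, the categorical conditionals, and the density values per Eq. 3; this sketch uses the numeric inputs from the worked example above:

```python
def joint_probability(prior, categorical, densities):
    """Eq. 3: p(Q) times the product of the conditional terms, using count
    ratios for categorical attributes and density values for continuous ones."""
    result = prior
    for p in categorical + densities:
        result *= p
    return result

# Inputs taken from the Signal and Cancel calculations above.
signal = joint_probability(7 / 77, [1 / 7, 1 / 7, 2 / 7], [0.029169, 0.04289])
cancel = joint_probability(10 / 77, [4 / 10, 3 / 10, 3 / 10], [0.018136, 0.03867])
```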
The probabilities are normalized 335 and the top three probabilities and corresponding queries/problems are selected 340.
Normalization:
p(Signal/Arizona,Family,Nokia,230,120) = [0.0000006632/(0.0000006632+0.0000032789)]×100 = 16.83%
p(Cancel/Arizona,Family,Nokia,230,120) = [0.0000032789/(0.0000032789+0.0000006632)]×100 = 83.17%
Thus, in this example, the cancel problem has a significantly higher probability of occurring than the signal problem.
If any conditional probability for an attribute is zero, the zero cannot be used in calculations because any product containing a zero factor is also zero. In this situation, the Laplace estimator is used:
p(Ai/Q) = (x+1)/(y+n) Eq. (8)
where x is the number of times attribute Ai appears when query Q appears, y is the number of times query Q appears, and 1/n is the prior probability of any query/problem. If x and y were not present in the equation, the probability would be 1/n. With this estimator, even if x is 0, the conditional probability is nonzero.
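Equation 8 reduces to a one-line sketch; reading x, y, and n as counts (attribute co-occurrences, query occurrences, and the smoothing denominator) is an assumption consistent with the unsmoothed ratio described earlier:

```python
def laplace_conditional(x, y, n):
    """Eq. 8: Laplace-smoothed conditional (x + 1)/(y + n), nonzero even
    when the raw co-occurrence count x is zero."""
    return (x + 1) / (y + n)
```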
Models
Exploring the models requires sorting the data in order of magnitude and moving between a high-level organization, e.g. general trends, and low-level views of the data, i.e. the details. In addition, it is possible to drill up and down through levels in hierarchically structured data and to change the view of the data, e.g. switch the view from a bar graph to a pie graph, view the graph from a different perspective, etc.
In one embodiment, the models represent data with text tables, aligned bars, stacked bars, discrete lines, scatter plots, Gantt charts, heat maps, side-by-side bars, measure bars, circles, scatter matrices, histograms, etc.
The predictive engine 125 predicts information such as the probability that a customer will face a particular problem based on the customer's engagement stage with a particular product. An engagement stage is product specific. In one embodiment, the engagement stage is measured by time, e.g. the first stage is 0-15 days after purchase, the second stage is 15 days to two months after purchase, etc. In addition, the model predicts a customer's preference for a particular channel based on the type of concern and its impact on customer experience. Lastly, the models predict the probable impact of a particular problem on the customer's loyalty, growth, and profitability score.
Once the model is generated, a business can use the system to predict how best to serve its clients, e.g. whether to staff more customer service representatives or invest in developing a sophisticated user interface for receiving orders. In another embodiment, the model is used in real time to predict customer behavior. For example, a customer calls a company and, based on the customer's telephone number, the customer ID is retrieved and the company predicts a user interaction mode. If the user interaction mode is a telephone conversation with an agent, the model assigns an agent to the customer.
In another example, the model is incorporated into a website. A customer visits the website and requests interaction. The website prompts the user for information, for example, the user name and product. The model associates the customer with that customer's personal data or the model associates the customer with a cluster of other customers with similar shopping patterns, regional location, etc. The model predicts a user interaction mode based on the answers. For example, if the system predicts that the customer is best served by communicating over the phone with an agent, the program provides the user with a phone number of an agent.
Services
In one embodiment, the services are categorized as booking, providing a quote, tracking a package, inquiries on supplies, and general inquiries. The model prediction is applied to customer categories according to these services. For example, a model predicts a customer's preference for a particular channel when the customer wants a quote for a particular good or service.
Generating Models
The predictive engine 125 is now ready to build models. In one embodiment, the user selects 420 data sources, which are used to build 425 models. By experimenting with different variables in the models, the predictive engine 125 tests and validates 430 different models. Based on these models, the predictive engine 125 identifies key contributing variables 435 and builds interaction methodologies to influence the outputs 440. As more data is received by the data warehouse 120, these steps are repeated to further refine the model.
In one embodiment, the predictive engine 125 receives 445 a request in real time to generate a predictive model. The predictive engine 125 builds 425 a model using the data received by the user. For example, the predictive engine 125 may receive an identifying characteristic of a customer, e.g. unique ID, telephone number, name, etc. and be asked to predict the best mode of communicating with this customer, the most likely reason that the customer is calling, etc.
In another embodiment, the predictive engine receives a request for a model depicting various attributes as selected by the user from a user interface such as the one illustrated in the accompanying figures.
In one embodiment, the model generator 130 generates a model with multiple variables.
In one embodiment, the model generator 130 generates a model with multiple sections to depict more detailed information.
Customer problems can also be illustrated in two dimensional graphs.
The models can also predict future actions.
The system for predicting customer behavior can be implemented in computer software that is stored in a computer-readable storage medium on a client computer.
In one embodiment, the client 1700 receives data 1715 via a network 1720. The network 1720 can comprise any mechanism for the transmission of data, e.g., internet, wide area network, local area network, etc.
As will be understood by those familiar with the art, the invention may be embodied in other specific forms without departing from the spirit or essential characteristics thereof. Likewise, the particular naming and division of the members, features, attributes, and other aspects are not mandatory or significant, and the mechanisms that implement the invention or its features may have different names, divisions and/or formats. Accordingly, the disclosure of the invention is intended to be illustrative, but not limiting, of the scope of the invention, which is set forth in the following Claims.
This patent application is a continuation of U.S. patent application Ser. No. 12/392,058, filed Feb. 24, 2009, which is a continuation-in-part of U.S. patent application Ser. No. 11/360,145, filed Feb. 22, 2006, which issued as U.S. Pat. No. 7,761,321 on Jul. 20, 2010, and claims the benefit of U.S. provisional patent application Ser. No. 61/031,314, filed Feb. 25, 2008, the entirety of each of which is incorporated herein by this reference.
Prior publication data: US 20150242860 A1, Aug. 2015 (US).
Provisional application: Ser. No. 61/031,314, filed Feb. 2008 (US).
Continuation data: parent application Ser. No. 12/392,058, filed Feb. 2009 (US); child application Ser. No. 14/707,904 (US).
Continuation-in-part data: parent application Ser. No. 11/360,145, filed Feb. 2006 (US); child application Ser. No. 12/392,058 (US).