The present invention relates to methods of preventing healthcare fraud-waste-abuse, and more specifically to employing artificial intelligence machines to limit financial losses and to detect unwarranted reimbursements.
Healthcare fraud, waste, and abuse have blossomed in recent years because deep pockets like the Government and large insurance companies now, more than ever, pay all the bills.
Insurance companies and the Government quite obviously try to control fraud, waste, and abuse, but their third-party, after-the-fact relationship to medical treatments makes them less able to control this kind of fraud effectively. Hospitals, clinics, pharmaceutical companies, and other healthcare providers have stepped in to exploit these inadequacies. Costs, as a direct result, have spiraled beyond all reason.
Medicare fraud is legally defined to include knowingly and willfully executing, or attempting to execute, a scheme or ploy to defraud the Medicare program, or obtaining information by means of false pretenses, deception, or misrepresentation in order to receive inappropriate payment from the Medicare program. The most frequent kinds of fraud are false statements and misrepresentations of entitlement or payment under the Medicare program.
The Centers for Medicare & Medicaid Services (CMS) defines the kind of fraud it fights as “the intentional deception or misrepresentation that the individual knows to be false or does not believe to be true, and the individual makes knowing that the deception could result in some unauthorized benefit to himself or herself or some other person.”
Presumably, the vast majority of government contractors who provide goods and services to the government are honest, as are most vendors serving private purchasers. Nevertheless, even a small dishonest fraction amounts to a substantial level of fraud directed at the Government, and thereby at all of us.
The particular kinds of healthcare fraud we all suffer from include:
Physicians and other healthcare practitioners are obvious sources of healthcare fraud, but healthcare fraud wrongdoers also include:
Better methods to combat fraud, waste, and abuse draw on information not limited to that included in the claims themselves. The most useful kinds of supplemental information include non-claims-based utilization data or actual clinical data from an EMR, and pharmacy claims or transactions.
Improvements in detecting waste and abuse in healthcare will require a different approach: a comprehensive rethinking of the waste and abuse crisis. Waste, fraud, and leakage in the industry are the major problem. Illegal activity, while significant in absolute numbers, is trivial when compared to $2.8T in annual healthcare spending. Solutions must focus on the breadth of leakage. For example, simple excessive billing of preventive visits (Evaluation and Management claims) results in $20-$30 of additional billing per visit. With one billion primary care physician visits each year, that is $20-$30 billion in annual leakage, larger than the entire fraud recoveries for the market in a single year.
Almost all conventional analytic solutions operate within extremely rigid boundaries, even those that purport to be non-hypothesis based. They are either designed or tuned to look at various scenarios in such a way that they will only catch a limited range of leakage problems. When something truly surprising happens, or variation occurs that is not anticipated, these models prove worthless.
Working solutions require a new approach: new algorithms and models that are not already trained or constrained within the boundaries of known scenarios, and technology designed to aggregate scenarios, get at large leakage issues easily, and identify the systemic issues that plague the system. Highly trained “eyes” are needed on the data output that can process raw data rapidly and efficiently.
Briefly, a method embodiment of the present invention prevents healthcare fraud-waste-abuse using artificial intelligence machines to limit financial losses. Healthcare payment request claims are analyzed by predictive models, and their behavioral details are compared to running profiles unique to each healthcare provider submitting the claims. A decision results that the healthcare payment request claim is or is not fraudulent-wasteful-abusive. If it is, a second analysis compares the behavior of the group in which the healthcare provider is clustered to a running profile unique to each group of healthcare providers submitting the claims. An overriding decision results that the instant healthcare payment request claim is not fraudulent-wasteful-abusive if it conforms to the group behavior.
The above and still further objects, features, and advantages of the present invention will become apparent upon consideration of the following detailed description of specific embodiments thereof, especially when taken in conjunction with the accompanying drawings.
Method embodiments of the present invention leverage artificial intelligence machines in the prevention of healthcare fraud-waste-abuse by individual and groups of providers submitting payment claims. My earlier U.S. patent application Ser. No. 14/517,872, filed Oct. 19, 2014, titled, HEALTHCARE FRAUD PROTECTION AND MANAGEMENT, is incorporated in full herein by reference.
I describe a data cleanup method in my U.S. patent application Ser. No. 14/935,742, DATA CLEAN-UP METHOD FOR IMPROVING PREDICTIVE MODEL TRAINING, filed Nov. 9, 2015, that would be useful in harmonizing and trimming away irrelevant, excess, and useless information received in these data records. I also describe a data enrichment method in my U.S. patent application Ser. No. 14/941,586, METHOD OF OPERATING ARTIFICIAL INTELLIGENCE MACHINES TO IMPROVE PREDICTIVE MODEL TRAINING AND PERFORMANCE, filed Nov. 14, 2015, that describes how the healthcare payment request claim data, non-claim based utilization data, actual clinical data, and pharmacy claim or transaction data records can be usefully combined to improve the performance of predictive models and smart agent profiling. Both such United States patent applications are parents to this continuation-in-part application that also continues-in-part from the HEALTHCARE FRAUD PROTECTION AND MANAGEMENT patent application.
A key descriptive attribute in widespread use in the healthcare field is the Diagnosis Related Group (DRG) code. This 3-digit code helps to organize diagnoses and procedures into clinically cohesive groups that demonstrate similar consumption of hospital resources. It is a rough form of fuzzification that can help artificial intelligence machines deal with unimportant nuances in data through a sort of clustering of statistical information. In general, fuzzification is a process of transforming crisp values into grades of membership, e.g., infants 0-1, toddlers 2-5, children 6-12, teenagers 13-19, and adults 20+. The healthcare attributes that properly contribute to a particular DRG classification are well understood. Odd values, or simply odd, abnormal attributes, that coexist in a healthcare payment claim with a proffered DRG are symptomatic of fraud. So when a predictive model like a decision tree or case-based reasoning logic classifies a different DRG than the proffered one, fraud is a likely culprit.
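By way of illustration, the age-group fuzzification mentioned above can be written as a short function. This is a minimal sketch; the function and label names are illustrative only and not part of the invention.

```python
def fuzzify_age(age: int) -> str:
    """Map a crisp age to a grade of membership, per the example above."""
    if age <= 1:
        return "infant"     # 0-1
    if age <= 5:
        return "toddler"    # 2-5
    if age <= 12:
        return "child"      # 6-12
    if age <= 19:
        return "teenager"   # 13-19
    return "adult"          # 20+
```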
The healthcare providers are correlated in a step 104 to particular ones of the incoming healthcare payment request claims with the processor and an algorithm that generates and maintains a unique smart agent profile in the computer memory storage device for each healthcare provider. An example of this correlation is represented in
A healthcare provider profile uniquely associated with a healthcare provider is accessed in a step 106 with the processor and an algorithm that compares the unique smart agent profile to an instant incoming healthcare payment request claim.
Particular ones of the incoming healthcare payment request claims are classified in a step 108 according to a fraud-waste-abuse criteria with the processor and an algorithm that includes a predictive model trained on an accumulation of supervised and unsupervised healthcare payment request claims previously submitted by essentially the same healthcare providers. And particular ones of the incoming healthcare payment request claims are classified in a step 110 with the processor and an algorithm that applies a unique individual behavior criteria based on a comparison of an individual's past behaviors extracted and recorded in their unique healthcare provider profile and an instant behavior evident in the instant incoming healthcare payment request claim stored in the computer memory storage device.
A decision is issued in a step 112 with the processor and an algorithm that decides an instant healthcare payment request claim is fraudulent-wasteful-abusive or not-fraudulent-wasteful-abusive based on a combination of a fraud-waste-abuse criteria classification and a unique individual behavior criteria classification stored in the computer memory storage device.
A unique healthcare provider profile of each healthcare provider stored in the computer memory storage device is updated with the decision in a step 114.
Steps 102-114 are then repeated as more incoming healthcare payment request claim records are received. A step 116 decides whether the process is done, e.g., when the instant payment request claim has been judged non-fraudulent.
Individual behaviors judged deviant, and therefore fraudulent-wasteful-abusive, may nevertheless be legitimate. If clustering identifies the individual as belonging to a group, and the instant behavior is consistent with the behavior profiles maintained for that group, then the instant incoming healthcare payment request claim may be more properly classified with a decision that it is non-fraudulent. In order to implement this, the method continues with additional steps.
Clusters of healthcare providers that share a group behavior are identified in a step 118.
Groups of healthcare providers are associated to particular ones of the incoming healthcare payment request claims in a step 120.
A healthcare provider profile uniquely associated with a group of healthcare providers is accessed in a step 122 and compared to an instant incoming healthcare payment request claim.
Particular ones of the incoming healthcare payment request claims are classified in a step 124 according to a group-behavior criteria and based on a comparison of past behaviors extracted and recorded in their unique healthcare provider profile and an instant behavior evident in the instant incoming healthcare payment request claim.
An overriding decision is issued in a step 126 with the processor and an algorithm that decides an instant healthcare payment request claim is fraudulent-wasteful-abusive or not-fraudulent-wasteful-abusive based on a combination of a fraud-waste-abuse criteria classification and a group behavior criteria classification stored in the computer memory storage device.
A unique healthcare provider profile of each group of healthcare providers stored in the computer memory storage device is updated with the overriding decision in a step 128.
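The flow of steps 102-128 can be summarized in a short sketch. The helper objects (provider_profiles, groups, predictive_model) and their methods are hypothetical stand-ins for the processor-and-algorithm steps described above, not names used by the invention.

```python
def decide(claim, provider_profiles, groups, predictive_model):
    """Illustrative flow of steps 102-128 (helper objects are hypothetical)."""
    provider = claim["provider_id"]                 # step 104: correlate claim to provider
    profile = provider_profiles[provider]           # step 106: access smart-agent profile

    fwa = predictive_model.classify(claim)          # step 108: fraud-waste-abuse criteria
    behavior = profile.compare(claim)               # step 110: individual behavior criteria
    decision = (fwa == "fwa") and (behavior == "deviant")   # step 112: combined decision

    profile.update(claim, decision)                 # step 114: update provider profile

    if decision:                                    # steps 118-128: group behavior may override
        group = groups.cluster_of(provider)         # steps 118-120: identify provider's cluster
        if group is not None:
            if group.compare(claim) == "consistent":   # steps 122-124: group-behavior criteria
                decision = False                    # step 126: overriding non-fraud decision
            group.update(claim, decision)           # step 128: update group profile
    return decision
```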
Step 210 represents an opportunity for fraudulent coding, e.g., a code that does not comport with the symptoms recorded in step 202 and the diagnostic test results in step 208. Any prescriptions needed are written in a step 212. A procedure code is entered in a step 214. Step 214 represents another opportunity for fraudulent coding, e.g., a procedure that does not comport with the diagnostic code recorded in step 210. A step 216 represents an encoding by the healthcare provider of the payment claim. These represent the many payment claims received by step 102 in
The data enrichment algorithm 408 is more fully described in my recent U.S. patent application Ser. No. 14/941,586, filed Nov. 14, 2015, and titled METHOD OF OPERATING ARTIFICIAL INTELLIGENCE MACHINES TO IMPROVE PREDICTIVE MODEL TRAINING AND PERFORMANCE. Such application is incorporated herein, in full, by reference. The non-claim data 410 represents facts already known about the healthcare provider submitting the payment claim record and/or details related to other claim attributes.
An enriched data 412 results that is used by a processor with an algorithm 414 that builds decision trees, case-based reasoning logic, smart agent profiles (for every healthcare provider and payment claim attribute), and other predictive models as detailed in the two patent applications just mentioned.
Instructions 416, 418, and 420, respectively, describe how to structure run-phase data cleaning, data enrichment, and predictive models.
In a run-phase, as represented more fully in
Step 102 in
Each claim includes data fields for five-digit diagnosis codes and four-digit procedure codes.
Detection of upcoding fraud includes analyzing symptoms and test results, and is done with a processor and an algorithm that tests each primary diagnosis for cause-and-effect.
Below are some examples of DRG upcoding:
DRG 475 (respiratory system diagnosis with ventilator support) vs. DRG 127 (heart failure and shock)
Principal diagnosis of respiratory failure (518.81) with a secondary diagnosis of congestive heart failure (428.0) and a procedure code of 96.70, 96.71, or 96.72 (continuous mechanical ventilation).
The hospital bills the respiratory failure as the principal diagnosis, but the respiratory failure was due to the patient's congestive heart failure, which by coding guidelines should have been the principal diagnosis.
DRG 287 (skin grafts and wound debridement for endocrine, nutritional and metabolic disorders) vs. DRG 294 (diabetes, age greater than 35) or DRG 295 (diabetes, age 0-35)
Principal diagnosis of diabetes mellitus (250.xx) with a principal procedure of excisional debridement of wound, infection or burn (86.22).
The hospital bills for the excisional debridement of a wound (86.22) when, in fact, a non-excisional debridement (86.28) was performed on the patient. This changes the DRG to 294 or 295 (depending on the age of the patient).
DRG 297 (nutritional and miscellaneous metabolic disorders, age greater than 17) and 320 (kidney and urinary tract infections, age greater than 17) vs. DRG 383 (other antepartum diagnoses with medical complications)
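The first example above (DRG 475 vs. DRG 127) reduces to a simple claim-level rule. Below is a minimal sketch, assuming a claim is represented as a dict with hypothetical keys principal_dx, secondary_dx, and procedures; the actual claim record layout is not specified here.

```python
VENT_CODES = {"96.70", "96.71", "96.72"}   # continuous mechanical ventilation

def flag_drg475_upcoding(claim: dict) -> bool:
    """Flag the DRG 475 vs. 127 pattern: respiratory failure billed as the
    principal diagnosis with CHF secondary plus a ventilation procedure."""
    return (claim["principal_dx"] == "518.81"              # respiratory failure
            and "428.0" in claim["secondary_dx"]           # congestive heart failure
            and bool(VENT_CODES & set(claim["procedures"])))
```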
For example, a learning database of historical data has 46,933 records and a testing database has 56,976 records. The first database includes one extra attribute which is used for learning the correct class.
The DRG (Diagnosis Related Group) class attribute is the output that defines what the model will predict. The other attributes are its inputs: they are used to create the model.
Record Example
Unsupervised Learning of Normal and Abnormal Behavior
Each field or attribute in a data record is represented by a corresponding smart-agent. Each smart-agent representing a field will build what-is-normal (normality) and what-is-abnormal (abnormality) metrics regarding other smart-agents.
Apparatus for creating smart-agents is supervised or unsupervised. When supervised, an expert provides information about each domain. Each numeric field is characterized by a list of intervals of normal values, and each symbolic field is characterized by a list of normal values. It is possible for a field to have only one interval. If there are no intervals for an attribute, the system apparatus can skip testing the validity of its values, e.g., when an event occurs.
As an example, a doctor (expert) can give the temperature of the human body as within an interval [35° C.: 41° C.], and the hair colors can be {black, blond, red}.
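In the supervised case, the expert-provided domains can be kept directly with each field's smart-agent. Below is a minimal sketch with illustrative field names: numeric fields carry lists of normal intervals, symbolic fields carry lists of normal values, and a field with no domain is skipped rather than verified.

```python
# Expert-supplied domains (illustrative): numeric fields get interval lists,
# symbolic fields get sets of normal values.
DOMAINS = {
    "body_temperature": [(35.0, 41.0)],        # degrees C; one interval is allowed
    "hair_color": {"black", "blond", "red"},
}

def is_normal(field: str, value) -> bool:
    """True if the value falls within the expert-defined normal domain.
    Fields with no domain are not tested (treated as normal)."""
    domain = DOMAINS.get(field)
    if domain is None:
        return True                            # no intervals: validity is not tested
    if isinstance(domain, set):
        return value in domain
    return any(lo <= value <= hi for lo, hi in domain)
```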
An unsupervised learning process uses the following algorithm:
1) For each field “a” of a Table:
Θmin represents the minimum number of elements an interval must include. This means that an interval will only be taken into account if it encapsulates enough values, so that its values can be considered normal because they are frequent.
The system apparatus defines two parameters that can be modified: the maximum number of intervals for each attribute, nmax, and the minimum frequency of values in each interval, fImin. Θmin is computed with the following method:
Θmin=fImin*number of records in the table.
Θdist represents the maximum width of an interval. This prevents the system apparatus from regrouping numeric values that are too disparate. For an attribute a, let mina be the smallest value of a in the whole table and maxa the biggest. Then:
Θdist=(maxa−mina)/nmax
For example, consider a numeric attribute of temperature with the following 14 values: 64, 65, 68, 69, 70, 71, 72, 72, 75, 75, 80, 81, 83, 85.
The first step is to sort and group the values into “La”:
“La”={(64, 1) (65, 1) (68, 1) (69, 1) (70, 1) (71, 1) (72, 2) (75, 2) (80, 1) (81, 1) (83, 1) (85, 1)}
Then the system apparatus creates the intervals of normal values:
Consider fImin=10% and nmax=5; then Θmin=0.10×14=1.4 and Θdist=(85−64)/5=4.2.
When a new event occurs, the values of each field are verified against the intervals of normal values the system created, or that were fixed by an expert. It first checks that at least one interval exists; if not, the field is not verified. Otherwise, the value is tested against the intervals: if the value falls inside one of them, it is normal; otherwise a warning is generated for the field.
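Putting these pieces together, the interval construction and the event check can be sketched as follows. The exact grouping rule is not spelled out above, so this sketch assumes a greedy left-to-right grouping bounded by Θdist, keeping only intervals with more than Θmin values; run on the example data it reproduces Θmin=1.4 and Θdist=4.2.

```python
from collections import Counter

def build_intervals(values, f_i_min=0.10, n_max=5):
    """Greedy sketch: grow intervals left-to-right while their width stays
    within theta_dist; keep only intervals holding more than theta_min values."""
    theta_min = f_i_min * len(values)                  # e.g., 10% of 14 records = 1.4
    theta_dist = (max(values) - min(values)) / n_max   # e.g., (85 - 64) / 5 = 4.2
    la = sorted(Counter(values).items())               # "La": sorted (value, count) pairs
    intervals, start, prev, count = [], None, None, 0
    for value, card in la:
        if start is None or value - start > theta_dist:
            if start is not None and count > theta_min:
                intervals.append((start, prev, count))
            start, count = value, 0
        count += card
        prev = value
    if start is not None and count > theta_min:
        intervals.append((start, prev, count))
    return intervals

def check_event(value, intervals):
    """If no interval exists the field is not verified; a value outside
    every interval generates a warning."""
    if not intervals:
        return "not verified"
    return "normal" if any(lo <= value <= hi for lo, hi, _ in intervals) else "warning"

temps = [64, 65, 68, 69, 70, 71, 72, 72, 75, 75, 80, 81, 83, 85]
print(build_intervals(temps))   # -> [(64, 68, 3), (69, 72, 5), (75, 75, 2), (80, 83, 3)]
print(check_event(66, build_intervals(temps)))   # -> 'normal'
```

Note that the isolated value 85 is dropped as an infrequent outlier, so a later event at 85 would generate a warning.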
During creation, dependencies between two fields are expressed as follows:
When field 1 is equal to the value v1, then field 2 takes the value v2 with a significant frequency p.
Example: when species is human the body temperature is 37.2° C. with a 99.5% accuracy.
Let cT be the number of records in the whole database. For each attribute X in the table:
Retrieve the list of distinct values for X with the cardinality cxi of each value xi, and test each value for significance: (cxi/cT)>Θx.
If true, for each attribute Y in the table, Y≠X:
Retrieve the list of distinct values for Y with the cardinality cyj of each value yj:
Retrieve the number of records cij where (X=xi) and (Y=yj). If the relation is significant, save it: if (cij/cxi)>Θxy then save the relation [(X=xi)⇒(Y=yj)] with the cardinalities cxi, cyj and cij.
The accuracy of this relation is given by the quotient (cij/cxi).
Verify the coherence of all the relations: for each relation
[(X=xi)⇒(Y=yj)] (1)
Search if there is a relation
[(Y=yj)⇒(X=xk)] (2)
If xi≠xk, remove both relations (1) and (2) from the model; otherwise they will trigger a warning at each event, since (1) and (2) cannot both be true.
To find all the dependencies, the system apparatus analyses a database with the following algorithm:
The default value for Θx is 1%: the system apparatus will only consider the significant values of each attribute.
The default value for Θxy is 85%: the system apparatus will only consider the significant relations found.
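The dependency search can be sketched directly from the steps above. Below is a minimal, brute-force reading with Θx and Θxy at their default values; the record representation (a list of dicts) is an assumption for illustration.

```python
from collections import Counter
from itertools import product

def find_relations(records, theta_x=0.01, theta_xy=0.85):
    """Brute-force sketch: save (X=xi) => (Y=yj) with its cardinalities
    whenever xi is significant and cij/cxi exceeds theta_xy."""
    c_t = len(records)
    attrs = list(records[0].keys())
    relations = {}
    for x, y in product(attrs, attrs):
        if x == y:
            continue
        for xi, cxi in Counter(r[x] for r in records).items():
            if cxi / c_t <= theta_x:        # first test: (cxi/cT) > theta_x
                continue
            for yj, cij in Counter(r[y] for r in records if r[x] == xi).items():
                if cij / cxi > theta_xy:    # second test: (cij/cxi) > theta_xy
                    cyj = sum(1 for r in records if r[y] == yj)
                    relations[(x, xi, y, yj)] = (cxi, cyj, cij)
    return relations
```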
A relation is defined by: (Att1=v1)⇒(Att2=v2) (eq).
All the relations are stored in a tree made with four levels of hash tables to increase the speed of the system apparatus. A first level is a hash of the attribute's name (Att1 in eq); a second level is a hash, for each attribute, of the values that imply some correlations (v1 in eq); a third level is a hash of the names of the attributes with correlations (Att2 in eq) to the first attribute; a fourth and last level has the values of the second attribute that are correlated (v2 in eq).
Each leaf represents a relation. At each leaf, the system apparatus stores the cardinalities cxi, cyj, and cij. This allows the system apparatus to incrementally update the relations during its lifetime. It also gives:
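In a dynamic language, the four levels of hash tables map naturally onto nested dictionaries. A minimal sketch follows, with the leaf holding the cardinalities cxi, cyj, and cij; the example values stored at the end are illustrative only.

```python
# Four levels of hash tables: Att1 -> v1 -> Att2 -> v2 -> leaf cardinalities.
relation_tree = {}

def store_relation(att1, v1, att2, v2, cxi, cyj, cij):
    """Each leaf keeps (cxi, cyj, cij) so the relation can be updated
    incrementally during the system's lifetime."""
    (relation_tree
        .setdefault(att1, {})
        .setdefault(v1, {})
        .setdefault(att2, {}))[v2] = {"cxi": cxi, "cyj": cyj, "cij": cij}

# Illustrative cardinalities only, echoing the species/temperature example:
store_relation("species", "human", "body_temperature", 37.2, 995, 995, 990)
```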
Consider an example with two attributes, A and B:
There are ten records: cT=10.
Consider all the possible relations:
With the defaults values for Θx and Θxy, for each possible relation, the first test (cxi/cT)>Θx is successful (since Θx=1%) but the relations (1) and (7) would be rejected (since Θxy=85%).
Then the system apparatus verifies the coherence of each remaining relation with an algorithm:
(A=2)⇒(B=1) is coherent with (B=1)⇒(A=2);
(A=3)⇒(B=2) is not coherent since there is no longer any relation (B=2)⇒ . . . ;
(B=4)⇒(A=1) is not coherent since there is no longer any relation (A=1)⇒ . . . ;
(B=3)⇒(A=1) is not coherent since there is no longer any relation (A=1)⇒ . . . ;
(B=1)⇒(A=2) is coherent with (A=2)⇒(B=1).
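The coherence verification can be sketched as follows. Per the worked example above, a relation whose reverse relation is missing is also treated as incoherent; the relation keys match the tuples produced by the find_relations sketch earlier.

```python
def verify_coherence(relations):
    """Sketch: (X=xi)=>(Y=yj) is coherent only when a reverse relation
    (Y=yj)=>(X=xk) exists with xk == xi. Per the worked example, a missing
    reverse relation is also treated as incoherent."""
    def reverse_of(rel):
        x, xi, y, yj = rel
        return next(((a, vi, b, vj) for (a, vi, b, vj) in relations
                     if a == y and vi == yj and b == x), None)

    coherent = {}
    for rel, cards in relations.items():
        rev = reverse_of(rel)
        if rev is not None and rev[3] == rel[1]:   # reverse points back to xi
            coherent[rel] = cards
    return coherent

# E.g., (A=2)=>(B=1) and (B=1)=>(A=2) confirm each other and are both kept,
# while (A=3)=>(B=2) is dropped because no relation starts from (B=2).
```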
The system apparatus classifies the normality/abnormality of each new event in real-time during live production and detection.
For each event attribute/value pair (X, xi):
Look in the model for all the relations starting with [(X=xi)⇒ . . . ]
The system apparatus incrementally learns with new events:
Increment cT by the number of records in the new table T.
For each relation [(X=xi)⇒(Y=yj)] previously created:
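The per-relation update steps are elided above. One plausible reading, sketched here as an assumption, is that cT and each leaf's stored cardinalities grow by the counts observed in the new table T; the leaf layout matches the nested-dictionary sketch earlier.

```python
def incremental_update(leaf, new_records, x, xi, y, yj, state):
    """One plausible reading of the elided steps: cT grows by the size of the
    new table, and the leaf's stored cardinalities grow by the counts seen there."""
    state["cT"] += len(new_records)
    leaf["cxi"] += sum(1 for r in new_records if r[x] == xi)
    leaf["cyj"] += sum(1 for r in new_records if r[y] == yj)
    leaf["cij"] += sum(1 for r in new_records if r[x] == xi and r[y] == yj)
```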
In general, a process for fraud-waste-abuse protection comprises training a variety of real-time, risk-scoring fraud-waste-abuse models with training data selected for each from a common transaction history that then specialize each member in its overview of a selected vertical claim processing financial transactional channel. The variety of real-time, risk-scoring fraud-waste-abuse models is arranged after the training into a parallel arrangement so that all receive a mixed channel flow of real-time claim data or authorization requests. The parallel arrangement of diversity trained real-time, risk-scoring fraud-waste-abuse models is hosted on a network server platform for real-time risk scoring of the mixed channel flow of real-time claim data or authorization requests. Risk thresholds are updated without delay for particular healthcare providers, and other healthcare providers in every one of the parallel arrangement of diversity trained real-time, risk-scoring fraud-waste-abuse models when any one of them detects a suspicious or outright fraudulent-wasteful-abusive claim data or authorization request for the healthcare provider.
Such process for fraud-waste-abuse protection can further comprise steps for building a population of real-time, long-term, and recursive profiles for each healthcare provider in each of the real-time, risk-scoring fraud-waste-abuse models. Then, during real-time use, the real-time, long-term, and recursive profiles for each healthcare provider are maintained and updated in each and all of the real-time, risk-scoring fraud-waste-abuse models with newly arriving data.
Incremental learning technologies are embedded in the machine algorithms and smart-agent technology. These are continually re-trained with at least one processor and an algorithm that machine-learns from any false positives and negatives that occur, to avoid repeating classification errors. Any data mining logic incrementally changes its decision trees by creating new links or updating existing links and weights, any neural networks update a weight matrix, any case-based reasoning logic updates a generic case or creates a new one, and any corresponding smart-agents update their profiles by adjusting a normal/abnormal threshold stored in a memory storage device.
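For the smart-agent portion, one illustrative reading of this incremental re-training is a threshold adjustment on each confirmed error. The step size and attribute name below are assumptions, not specified in the text; a sketch only.

```python
def learn_from_error(smart_agent, error_type, step=0.05):
    """Illustrative smart-agent update: loosen the normal/abnormal threshold
    after a false positive, tighten it after a false negative. The step size
    and the 'threshold' attribute are assumptions, not specified in the text."""
    if error_type == "false_positive":
        smart_agent.threshold += step   # tolerate more deviation before flagging
    elif error_type == "false_negative":
        smart_agent.threshold -= step   # flag sooner next time
```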
Although particular embodiments of the present invention have been described and illustrated, such is not intended to limit the invention. Modifications and changes will no doubt become apparent to those skilled in the art, and it is intended that the invention only be limited by the scope of the appended claims.
The current patent application is a continuation patent application which claims priority benefit with regard to all common subject matter to identically-titled U.S. patent application Ser. No. 14/986,534, filed Dec. 31, 2015, which, itself, is: (A) a continuation-in-part application of and claims priority benefit with regard to all common subject matter to U.S. patent application Ser. No. 14/815,848, filed Jul. 31, 2015, entitled AUTOMATION TOOL DEVELOPMENT METHOD FOR BUILDING COMPUTER FRAUD MANAGEMENT APPLICATIONS, which, itself, is a continuation-in-part application of and claims priority benefit with regard to all common subject matter to U.S. patent application Ser. No. 14/514,381, filed Oct. 15, 2014, and entitled ARTIFICIAL INTELLIGENCE FRAUD MANAGEMENT SOLUTION; (B) a continuation-in-part application of and claims priority benefit with regard to all common subject matter to U.S. patent application Ser. No. 14/521,667, filed Oct. 23, 2014, and entitled BEHAVIOR TRACKING SMART AGENTS FOR ARTIFICIAL INTELLIGENCE FRAUD PROTECTION AND MANAGEMENT; (C) a continuation-in-part application of and claims priority benefit with regard to all common subject matter to U.S. patent application Ser. No. 14/815,934, filed Jul. 31, 2015, entitled METHOD FOR DETECTING MERCHANT DATA BREACHES WITH A COMPUTER NETWORK SERVER; (D) a continuation-in-part application of and claims priority benefit with regard to all common subject matter to U.S. patent application Ser. No. 14/517,771, filed Oct. 17, 2014, entitled REAL-TIME CROSS-CHANNEL FRAUD PROTECTION; (E) a continuation-in-part application of and claims priority benefit with regard to all common subject matter to U.S. patent application Ser. No. 14/517,872, filed Oct. 19, 2014, entitled HEALTHCARE FRAUD PROTECTION AND MANAGEMENT; and (F) a continuation-in-part application of and claims priority benefit with regard to all common subject matter to U.S. patent application Ser. No. 14/935,742, filed Nov. 9, 2015, entitled DATA CLEAN-UP METHOD FOR IMPROVING PREDICTIVE MODEL TRAINING. The listed earlier-filed non-provisional applications are hereby incorporated by reference in their entireties into the current patent application.
Number | Name | Date | Kind |
---|---|---|---|
5377354 | Scannell et al. | Dec 1994 | A |
5692107 | Simoudis et al. | Nov 1997 | A |
5819226 | Gopinathan et al. | Oct 1998 | A |
5822741 | Fischthal | Oct 1998 | A |
6009199 | Ho | Dec 1999 | A |
6026397 | Sheppard | Feb 2000 | A |
6029154 | Pettitt | Feb 2000 | A |
6122624 | Tetro et al. | Sep 2000 | A |
6161130 | Horvitz et al. | Dec 2000 | A |
6254000 | Degen et al. | Jul 2001 | B1 |
6272479 | Farry et al. | Aug 2001 | B1 |
6330546 | Gopinathan et al. | Dec 2001 | B1 |
6347374 | Drake et al. | Feb 2002 | B1 |
6424997 | Buskirk, Jr. et al. | Jul 2002 | B1 |
6453246 | Agrafiotis et al. | Sep 2002 | B1 |
6535728 | Perfit et al. | Mar 2003 | B1 |
6601048 | Gavan et al. | Jul 2003 | B1 |
6647379 | Howard et al. | Nov 2003 | B2 |
6711615 | Porras et al. | Mar 2004 | B2 |
6782375 | Abdel-Moneim et al. | Aug 2004 | B2 |
6889207 | Slemmer et al. | May 2005 | B2 |
7007067 | Azvine et al. | Feb 2006 | B1 |
7036146 | Goldsmith | Apr 2006 | B1 |
7089592 | Adjaoute | Aug 2006 | B2 |
7165037 | Lazarus et al. | Jan 2007 | B2 |
7251624 | Lee et al. | Jul 2007 | B1 |
7403922 | Lewis et al. | Jul 2008 | B1 |
7406502 | Oliver et al. | Jul 2008 | B1 |
7433960 | Dube et al. | Oct 2008 | B1 |
7457401 | Lawyer et al. | Nov 2008 | B2 |
7464264 | Goodman et al. | Dec 2008 | B2 |
7483947 | Starbuck et al. | Jan 2009 | B2 |
7562122 | Oliver et al. | Jul 2009 | B2 |
7631362 | Ramsey | Dec 2009 | B2 |
7668769 | Baker et al. | Feb 2010 | B2 |
7813937 | Pathria et al. | Oct 2010 | B1 |
7835919 | Bradley et al. | Nov 2010 | B1 |
7853469 | Maitland et al. | Dec 2010 | B2 |
8015108 | Haggerty et al. | Sep 2011 | B2 |
8027439 | Zoldi et al. | Sep 2011 | B2 |
8036981 | Shirey et al. | Oct 2011 | B2 |
8041597 | Li et al. | Oct 2011 | B2 |
8090648 | Zoldi et al. | Jan 2012 | B2 |
8458069 | Adjaoute | Jun 2013 | B2 |
8484301 | Wilson et al. | Jul 2013 | B2 |
8548137 | Zoldi et al. | Oct 2013 | B2 |
8555077 | Davis et al. | Oct 2013 | B2 |
8561007 | Challenger et al. | Oct 2013 | B2 |
8572736 | Lin | Oct 2013 | B2 |
8744979 | Sundelin et al. | Jun 2014 | B2 |
8805737 | Chen et al. | Aug 2014 | B1 |
9264442 | Bart et al. | Feb 2016 | B2 |
9400879 | Tredoux et al. | Jul 2016 | B2 |
9721296 | Chrapko | Aug 2017 | B1 |
9898741 | Siegel et al. | Feb 2018 | B2 |
10339606 | Gupta et al. | Jul 2019 | B2 |
20020188533 | Sanchez et al. | Dec 2002 | A1 |
20020194119 | Wright et al. | Dec 2002 | A1 |
20030009495 | Adjaoute | Jan 2003 | A1 |
20030084449 | Chane et al. | May 2003 | A1 |
20030158751 | Suresh et al. | Aug 2003 | A1 |
20040073634 | Haghpassand | Apr 2004 | A1 |
20040111363 | Trench et al. | Jun 2004 | A1 |
20040153555 | Haverinen et al. | Aug 2004 | A1 |
20040225473 | Aoki et al. | Nov 2004 | A1 |
20060041464 | Powers et al. | Feb 2006 | A1 |
20060149674 | Cook et al. | Jul 2006 | A1 |
20060212350 | Ellis et al. | Sep 2006 | A1 |
20070067853 | Ramsey | Mar 2007 | A1 |
20070112667 | Rucker | May 2007 | A1 |
20070124246 | Lawyer et al. | May 2007 | A1 |
20070174164 | Biffle et al. | Jul 2007 | A1 |
20070174214 | Welsh et al. | Jul 2007 | A1 |
20070239604 | O'Connell et al. | Oct 2007 | A1 |
20080086365 | Zollino et al. | Apr 2008 | A1 |
20080104101 | Kirshenbaum et al. | May 2008 | A1 |
20080162259 | Patil et al. | Jul 2008 | A1 |
20080281743 | Pettit | Nov 2008 | A1 |
20090307028 | Eldon et al. | Dec 2009 | A1 |
20100027527 | Higgins et al. | Feb 2010 | A1 |
20100082751 | Meijer et al. | Apr 2010 | A1 |
20100115610 | Tredoux et al. | May 2010 | A1 |
20100125470 | Chisholm | May 2010 | A1 |
20100191634 | Macy | Jul 2010 | A1 |
20100228656 | Wasserblat et al. | Sep 2010 | A1 |
20100305993 | Fisher | Dec 2010 | A1 |
20110016041 | Scragg | Jan 2011 | A1 |
20110035440 | Henkin et al. | Feb 2011 | A1 |
20110055196 | Sundelin et al. | Mar 2011 | A1 |
20110055264 | Sundelin et al. | Mar 2011 | A1 |
20110238566 | Santos | Sep 2011 | A1 |
20110258049 | Ramer et al. | Oct 2011 | A1 |
20110276468 | Lewis et al. | Nov 2011 | A1 |
20110307382 | Siegel et al. | Dec 2011 | A1 |
20120047072 | Larkin | Feb 2012 | A1 |
20120137367 | Dupont et al. | May 2012 | A1 |
20120203698 | Duncan et al. | Aug 2012 | A1 |
20120226613 | Adjaoute | Sep 2012 | A1 |
20130018796 | Kolhatkar et al. | Jan 2013 | A1 |
20130204755 | Zoldi et al. | Aug 2013 | A1 |
20130305357 | Ayyagari et al. | Nov 2013 | A1 |
20140082434 | Knight et al. | Mar 2014 | A1 |
20140149128 | Getchius | May 2014 | A1 |
20140180974 | Kennel et al. | Jun 2014 | A1 |
20140279803 | Burbank et al. | Sep 2014 | A1 |
20150046224 | Adjaoute | Feb 2015 | A1 |
20150161609 | Christner | Jun 2015 | A1 |
20150193263 | Nayyar et al. | Jul 2015 | A1 |
20150279155 | Chun et al. | Oct 2015 | A1 |
20150348042 | Jivraj et al. | Dec 2015 | A1 |
20160260102 | Nightengale et al. | Sep 2016 | A1 |
20170006141 | Bhadra | Jan 2017 | A1 |
20170083386 | Wing et al. | Mar 2017 | A1 |
20170270534 | Zoldi et al. | Sep 2017 | A1 |
20170347283 | Kodaypak | Nov 2017 | A1 |
20180040064 | Grigg et al. | Feb 2018 | A1 |
20180048710 | Altin | Feb 2018 | A1 |
20180151045 | Kim et al. | May 2018 | A1 |
20180182029 | Vinay | Jun 2018 | A1 |
20180208448 | Zimmerman et al. | Jul 2018 | A1 |
20180253657 | Zhao et al. | Sep 2018 | A1 |
20190156417 | Zhao et al. | May 2019 | A1 |
20190213498 | Adjaoute | Jul 2019 | A1 |
20190236695 | McKenna et al. | Aug 2019 | A1 |
20190250899 | Riedl et al. | Aug 2019 | A1 |
20190265971 | Behzadi et al. | Aug 2019 | A1 |
20190278777 | Malik et al. | Sep 2019 | A1 |
Number | Date | Country |
---|---|---|
4230419 | Mar 1994 | DE |
0647903 | Apr 1995 | EP |
0631453 | Dec 2001 | EP |
9406103 | Mar 1994 | WO |
9501707 | Jan 1995 | WO |
9628948 | Sep 1996 | WO |
9703533 | Jan 1997 | WO |
9832086 | Jul 1998 | WO |
Entry |
---|
Office Action From U.S. Appl. No. 16/168,566 (dated Mar. 4, 2020). |
Office Action From U.S. Appl. No. 14/522,463 (dated Mar. 24, 2020). |
Office Action From U.S. Appl. No. 16/205,909 (dated Apr. 22, 2020). |
Office Action From U.S. Appl. No. 16/398,917 (dated Mar. 11, 2020). |
Office Action From U.S. Appl. No. 16/369,626 (dated Jun. 2, 2020). |
RAID, Feb. 28, 2014, www.prepressure.com, printed through www.archive.org (Year: 2014). |
Office Action From U.S. Appl. No. 14/673,895 (dated Oct. 30, 2015). |
Office Action From U.S. Appl. No. 14/673,895 (dated Feb. 12, 2016). |
Office Action From U.S. Appl. No. 14/673,895 (dated Jul. 14, 2017). |
Office Action From U.S. Appl. No. 14/673,895 (dated Oct. 2, 2017). |
Office Action From U.S. Appl. No. 14/690,380 (dated Jul. 15, 2015). |
Office Action From U.S. Appl. No. 14/690,380 (dated Dec. 3, 2015). |
Office Action From U.S. Appl. No. 14/690,380 (dated Jun. 30, 2016). |
Office Action From U.S. Appl. No. 14/690,380 (dated Nov. 17, 2016). |
Office Action From U.S. Appl. No. 14/690,380 (dated Jun. 27, 2017). |
Office Action From U.S. Appl. No. 14/690,380 (dated Nov. 20, 2017). |
“10 Popular health care provider fraud schemes” by Charles Piper, Jan./Feb. 2013, FRAUD Magazine, www.fraud-magazine.com. |
Report to the Nations on Occupational Fraud and Abuse, 2012 Global Fraud Study, copyright 2012, 76 pp., Association of Certified Fraud Examiners, Austin, TX. |
Big Data Developments in Transaction Analytics, Scott Zoldi, Credit Scoring and Credit Control XIII Aug. 28-30, 2013 Fair Isaacs Corporation (FICO). |
Credit card fraud detection using artificial neural networks tuned by genetic algorithms, Dissertation: Carsten A. W. Paasch, Copyright 2013 Proquest, LLC. |
Fraud Detection Using Data Analytics in the Healthcare Industry, Discussion Whitepaper, ACL Services Ltd., (c) 2014, 8pp. |
Fraud Detection of Credit Card Payment System by Genetic Algorithm, K.RamaKalyani, D. UmaDevi Department of Computer Science, Sri Mittapalli College of Engineering, Guntur, AP, India., International Journal of Scientific & Engineering Research vol. 3, Issue 7, Jul. 2012 1, ISSN 2229-5518. |
Healthcare Fraud Detection, http://IJINIIW.21ct.com'solutions/healthcare-fraud-detection/, (c) 2013 21CT, Inc. |
Prevent Real-time fraud prevention, brochure, Brighterion, Inc. San Francisco, CA. |
“Agent-Based modeling: Methods and Techniques for Simulating Human Systems”, Eric Bonabeau, Icosystem Corporation, 545 Concord Avenue, Cambridge, MA 02138, 7280-7287; PNAS; May 14, 2002; vol. 99; suppl. 3; www.pnas.org/cgi/doi/10.1073/pnas.082080899. |
Office Action From U.S. Appl. No. 14/454,749 (dated Feb. 3, 2017). |
Office Action From U.S. Appl. No. 14/514,381 (dated Dec. 31, 2014). |
Office Action From U.S. Appl. No. 14/514,381 (dated May 13, 2015). |
Office Action From U.S. Appl. No. 14/514,381 (dated Jan. 10, 2018). |
Office Action From U.S. Appl. No. 14/514,381 (dated Apr. 2, 2018). |
Office Action From U.S. Appl. No. 14/815,848 (dated Sep. 30, 2015). |
Office Action From U.S. Appl. No. 14/815,848 (dated Mar. 14, 2016). |
Office Action From U.S. Appl. No. 14/815,934 (dated Sep. 30, 2015). |
Office Action From U.S. Appl. No. 14/815,934 (dated Feb. 11, 2016). |
Office Action From U.S. Appl. No. 14/815,934 (dated Sep. 23, 2016). |
Office Action From U.S. Appl. No. 14/815,934 (dated Apr. 7, 2017). |
Office Action From U.S. Appl. No. 14/815,940 (dated Oct. 1, 2015). |
Office Action From U.S. Appl. No. 14/815,940 (dated Dec. 28, 2017). |
Office Action From U.S. Appl. No. 14/929,341 (dated Dec. 22, 2015). |
Office Action From U.S. Appl. No. 14/929,341 (dated Feb. 4, 2016). |
Office Action From U.S. Appl. No. 14/929,341 (dated Aug. 19, 2016). |
Office Action From U.S. Appl. No. 14/929,341 (dated Jul. 31, 2018). |
Office Action From U.S. Appl. No. 14/938,844 (dated Apr. 11, 2016). |
Office Action From U.S. Appl. No. 14/938,844 (dated Jan. 25, 2017). |
Office Action From U.S. Appl. No. 14/938,844 (dated May 1, 2017). |
Office Action From U.S. Appl. No. 14/938,844 (dated Aug. 23, 2017). |
Office Action From U.S. Appl. No. 14/935,742 (dated Mar. 2, 2016). |
Office Action From U.S. Appl. No. 14/935,742 (dated Sep. 22, 2016). |
Office Action From U.S. Appl. No. 14/935,742 (dated Mar. 29, 2017). |
Office Action From U.S. Appl. No. 14/935,742 (dated May 31, 2017). |
Office Action From U.S. Appl. No. 14/941,586 (dated Jan. 5, 2017). |
Office Action From U.S. Appl. No. 14/941,586 (dated May 2, 2017). |
Office Action From U.S. Appl. No. 14/956,392 (dated Feb. 2, 2016). |
Office Action From U.S. Appl. No. 14/956,392 (dated Mar. 28, 2016). |
Office Action From U.S. Appl. No. 14/956,392 (dated Nov. 3, 2016). |
Office Action From U.S. Appl. No. 14/956,392 (dated May 3, 2017). |
Office Action From U.S. Appl. No. 14/986,534 (dated May 20, 2016). |
Office Action From U.S. Appl. No. 14/986,534 (dated Sep. 7, 2017). |
Office Action From U.S. Appl. No. 14/517,771 (dated Jul. 15, 2015). |
Office Action From U.S. Appl. No. 14/517,771 (dated Dec. 31, 2015). |
Office Action From U.S. Appl. No. 14/517,771 (dated Sep. 8, 2016). |
Office Action From U.S. Appl. No. 14/517,771 (dated Sep. 20, 2018). |
Office Action From U.S. Appl. No. 14/517,863 (dated Feb. 5, 2015). |
Office Action From U.S. Appl. No. 14/517,863 (dated Aug. 10, 2015). |
Office Action From U.S. Appl. No. 14/675,453 (dated Jun. 9, 2015). |
Office Action From U.S. Appl. No. 14/517,872 (dated Jan. 14, 2015). |
Office Action From U.S. Appl. No. 14/517,872 (dated Jul. 31, 2015). |
Office Action From U.S. Appl. No. 14/520,361 (dated Feb. 2, 2015). |
Office Action From U.S. Appl. No. 14/520,361 (dated Jul. 17, 2015). |
Office Action From U.S. Appl. No. 14/520,361 (dated Jul. 11, 2018). |
Office Action From U.S. Appl. No. 14/521,386 (dated Jan. 29, 2015). |
Office Action From U.S. Appl. No. 14/521,386 (dated Nov. 1, 2018). |
Office Action From U.S. Appl. No. 14/521,667 (dated Jan. 2, 2015). |
Office Action From U.S. Appl. No. 14/521,667 (dated Jun. 26, 2015). |
Office Action From U.S. Appl. No. 14/634,786 (dated Oct. 2, 2015). |
Office Action from U.S. Appl. No. 09/810,313 (dated Mar. 24, 2006). |
Office Action from U.S. Appl. No. 09/810,313 (dated Nov. 23, 2004). |
Office Action from U.S. Appl. No. 11/455,146 (dated Sep. 29, 2009). |
P.A. Porras and P.G. Neumann, “Emerald: Event Monitoring Enabling Responses to Anomalous Live Disturbances,” National Information Systems Security Conference, Oct. 1997. |
P.E. Proctor, “Computer Misuse Detection System (CMDSTM) Concepts,” SAIC Science and Technology Trends, pp. 137-145, Dec. 1996. |
S. Abu-Hakima, M. ToLoo, and T. White, “A Multi-Agent Systems Approach for Fraud Detection in Personal Communication Systems,” Proceedings of the Fourteenth National Conference on Artificial Intelligence (AAAI-97), pp. 1-8, Jul. 1997. |
Teng et al., “Adaptive real-time anomaly detection using inductively generated sequential patterns”, Proceedings of the Computer Society Symposium on research in Security and Privacy, vol. SYMP. 11, May 7, 1990, 278-284. |
Office Action from U.S. Appl. No. 14/522,463 (dated Oct. 3, 2019). |
Office Action from U.S. Appl. No. 14/522,463 (dated Jul. 18, 2019). |
Office Action From U.S. Appl. No. 16/205,909 (dated Dec. 27, 2019). |
Office Action From U.S. Appl. No. 16/205,909 (dated Sep. 27, 2019). |
Office Action From U.S. Appl. No. 16/398,917 (dated Sep. 26, 2019). |
Office Action From U.S. Appl. No. 15/947,790 (dated Nov. 18, 2019). |
Office Action From U.S. Appl. No. 14/525,273 (dated Jun. 26, 2018). |
Office Action From U.S. Appl. No. 14/525,273 (dated Feb. 9, 2015). |
Office Action From U.S. Appl. No. 14/525,273 (dated May 19, 2015). |
Office Action From U.S. Appl. No. 15/968,568 (dated Sep. 16, 2019). |
Office Action From U.S. Appl. No. 15/961,752 (dated Oct. 3, 2019). |
Clarke et al., Dynamic Forecasting Behavior by Analysts Theory and Evidence, 2005, Journal of Financial Economics (Year:2005). |
Data Compaction, 2013, Wikipedia, printed through www.archive.org (date is in the URL in YYYYMMDD format) (Year:2013). |
Data Consolidation, 2014, Techopedia, printed through www.archive.org (date is in the URL in YYYYMMDD format) (Year:2014). |
Data Mining, Mar. 31, 2014, Wikipedia, printed through www.archive.org (date is in the URL in YYYYMMDD format) (Year: 2014). |
Data Warehousing—Metadata Concepts, Mar. 24, 2014, TutorialsPoint, printed through www.archive.org (date is in the URL in YYYYMMDD format) (Year: 2014). |
Dave, Kushal, Steve Lawrence, and David M. Pennock. “Mining the peanut gallery: Opinion extraction and semantic classification of product reviews.” Proceedings of the 12th international conference on World Wide Web. ACM, 2003. |
I Need Endless Rolling List, 2007, QuinStreet, Inc. (Year: 2007). |
Office Action From U.S. Appl. No. 14/243,097 (dated Jun. 16, 2015). |
Office Action From U.S. Appl. No. 14/243,097 (dated Nov. 5, 2018). |
Office Action From U.S. Appl. No. 14/522,463 (dated Dec. 1, 2015). |
Office Action From U.S. Appl. No. 14/522,463 (dated Feb. 11, 2019). |
Office Action From U.S. Appl. No. 14/522,463 (dated Jun. 20, 2018). |
Office Action From U.S. Appl. No. 14/522,463 (dated Jun. 5, 2015). |
Office Action From U.S. Appl. No. 14/522,463 (dated Oct. 10, 2018). |
Office Action From U.S. Appl. No. 14/613,383 (dated Apr. 23, 2018). |
Office Action From U.S. Appl. No. 14/613,383 (dated Aug. 14, 2015). |
Office Action From U.S. Appl. No. 14/613,383 (dated Dec. 13, 2018). |
Yang,Yiming. “Expert network: Effective and efficient learning from human decisions in text categorization and retrieval.” Proceedings of the 17th annual international ACM SIGIR conference on Research and development in information retrieval Springer-Verlag New York, Inc., 1994. |
“2000 Internet Fraud Statistics,” National Fraud Information Center web site, http://www.fraud.org, 2001. |
“Axent Technologies' NetProwlerTM and Intruder AlertTM”, Hurwitz Report, Hurwitz Group, Inc., Sep. 2000. |
“CBA 1994 Fraud Survey,” California Bankers Association web site, http://www.calbankers.com/legal/fraud.html, Jun. 1996. |
“Check Fraud Against Businesses Proliferates,” Better Business Bureau web site, http://www.bbb.org/library/checkfraud.asp, 2000. |
“Check Fraud Statistics,” National Check Fraud Center web site, http://www.ckfraud.org/statistics.html, Date Unknown. |
“Consumers and Business Beware of Advance Fee Loan and Credit Card Fraud,” Better Business Bureau web site, http://www.bbb.org/library/feeloan.asp, 20003. |
“CyberSource Fraud Survey,” CyberSource, Inc., web site, http://www.cybersource.com/solutions/risk_management/us_fraud_survey.xml, Date Unknown. |
“EFalcon Helps E-Merchants Control Online Fraud,” Financial Technology Insights Newsletter, HNC Software, Inc., Aug. 2000. |
“Guidelines to Healthcare Fraud,” National health care Anti-Fraud Association web site, http://www.nhcaa.org/factsheet_guideline.html, Nov. 19, 1991. |
“Health Insurance Fraud,” http://www.helpstopcrime.org, Date Unknown. |
“HIPPA Solutions: Waste, Fraud, and Abuse,” ViPS, Inc., web site, http://www.vips.com/hippa/combatwaste.html, 2001. |
“HNC Insurance Solutions Introduces Spyder Software for Healthcare Fraud and Abuse Containment,” HNC Software, Inc., press release, Dec. 4, 1998. |
“Homeowners Insurance Fraud,” http://www.helpstopcrime.org, Date Unknown. |
“Insurance Fraud: The Crime You Pay for,” http://www.insurancefraud.org/facts.html, Date Unknown. |
“PRISM FAQ”, Nestor, Inc., www.nestor.com, Date Unknown. |
“SET Secure Electronic Transaction Specification,” Book 1: Business Description, May 1997. |
“Telemarketing Fraud Revisited,” Better Business Bureau web site, http://www.bbb.org/library/tele.asp, 2000. |
“The Impact of Insurance Fraud,” Chapter 5, Ohio Insurance Facts, Ohio Insurance Institute, 2000. |
“VeriCompTM Claimant,” HNC Software, Inc., web site, 2001. |
“What is Insurance Fraud?,” http://www.helpstopcrime.org, Date Unknown. |
“Wireless Fraud FAQ,” World of Wireless Communications web site, http://www.wow-com/consumer/faq/articles.cfm?textonly=1&ID=96, Date Unknown. |
“Workers Compensation Fraud,” http://www.helpstopcrime.org, Date Unknown. |
A. Adjaoute, “Responding to the e-Commerce Promise with Non-Algorithmic Technology,” Handbook of E-Business, Chapter F2, edited by J. Keyes, Jul. 2000. |
A. Valdes and H. Javitz, “The SRI IDES Statistical Anomaly Detector,” May 1991. |
D. Anderson, T. Frivold, and A. Valdes, “NExt-Generation intrusion Detection Expert System (NIDES): A Summary,” SRI Computer Science Laboratory technical report SRI-CSL-95-07, May 1995. |
Debar et al., “Neural Network Component for an Intrusion Detection System,” Proceedings for the Computer Society Symposium on Research in Security and Privacy, vol. SYMP.13, May 4, 1992, 240-250. |
Denault et al., “Intrusion Detection: approach and performance issues of the SECURENET system”, Computers and Security, 13 (1994), 495-508. |
John J. Xenakis, 1990, InformationWeek, 1990. n296,22. |
K. G. DeMarrais, “Old-fashioned check fraud still in vogue,” Bergen record Corp. web site, http://www.bergen.com/biz/savvy24200009242.htm, Sep. 24, 2000. |
M. B. Guard, “Calling Card Fraud—Travelers Beware!,” http://www.bankinfo.com/security/scallingcardhtml, Jun. 11, 1998. |
Maria Seminerio, Dec. 13, 1999, PC week, 99. |
Office Action from U.S. Appl. No. 09/810,313 (dated Jun. 22, 2005). |
Office Action From U.S. Appl. No. 16/264,144 (dated Oct. 16, 2020). |
Office Action From U.S. Appl. No. 16/168,566 (dated Dec. 18, 2020). |
Office Action From U.S. Appl. No. 15/866,563 (dated Nov. 27, 2020). |
Office Action from U.S. Appl. No. 16/424,187 (dated Feb. 26, 2021). |
Office Action from U.S. Appl. No. 16/226,246 (dated Dec. 15, 2020). |
Ex Parte Quayle Action from U.S. Appl. No. 16/369,626 (dated Jan. 7, 2021). |
Office Action From U.S. Appl. No. 16/168,566 (dated Sep. 9, 2020). |
Office Action From U.S. Appl. No. 16/226,246 (dated Aug. 4, 2020). |
Office Action From U.S. Appl. No. 16/184,894 (dated Sep. 21, 2020). |
Office Action From U.S. Appl. No. 16/592,249 (dated Sep. 14, 2020). |
Office Action From U.S. Appl. No. 16/601,226 (dated Sep. 2, 2020). |
Office Action From U.S. Appl. No. 16/674,980 (dated Sep. 3, 2020). |
Office Action From U.S. Appl. No. 16/856,131 (dated Sep. 24, 2020). |
Office Action From U.S. Appl. No. 16/679,819 (dated Sep. 25, 2020). |
Number | Date | Country | |
---|---|---|---|
20200111565 A1 | Apr 2020 | US |
Number | Date | Country | |
---|---|---|---|
Parent | 14986534 | Dec 2015 | US |
Child | 16677458 | US |
Number | Date | Country | |
---|---|---|---|
Parent | 14815848 | Jul 2015 | US |
Child | 14986534 | US | |
Parent | 14514381 | Oct 2014 | US |
Child | 14815848 | US | |
Parent | 14521667 | Oct 2014 | US |
Child | 14986534 | US | |
Parent | 14815934 | Jul 2015 | US |
Child | 14521667 | US | |
Parent | 14517771 | Oct 2014 | US |
Child | 14815934 | US | |
Parent | 14517872 | Oct 2014 | US |
Child | 14517771 | US | |
Parent | 14935742 | Nov 2015 | US |
Child | 14517872 | US |