Not Applicable.
The present invention is in the technical field of Analytical Technology focusing on Healthcare Improper Payment Prevention and Detection. Improper payments are hereby defined, collectively, as those payments containing or potentially containing individual cost dynamics, including but not limited to, fraud, abuse, over-servicing, over-utilization, waste or errors.
A plurality of external and internal data and predictive models, empirical Decision Management Strategies and decision codes are utilized in concert within a Software as a Service Risk Management System to identify and investigate claims that are potentially fraudulent or that contain abuse, over-servicing, over-utilization, waste or errors, or claims that are submitted by a potentially fraudulent, abusive or wasteful provider or healthcare merchant. The claim payments are researched, analyzed, reported on and subjected to empirical probabilistic strategy management procedures, actions or treatments.
More particularly, the present invention utilizes a research, analysis, empirical probabilistic strategy management and reporting software application system in order to optimally facilitate human interaction with, and automated review of, hundreds of millions of healthcare claims or transactions, or hundreds of thousands of providers or healthcare merchants, that have been determined to be at high risk for fraud, abusive practices, over-servicing, waste or errors.
The invention is intended for use by government payers or merchants, defined as public sector, and private payers or merchants, defined as private sector, healthcare organizations, as well as any healthcare intermediary. Healthcare intermediary is defined as any entity that accepts healthcare data or payment information and completes data aggregation or standardization, claims processing or program administration, applies rules or edits, stores data or offers data mining software, performs address or identity analysis or credentialing, offers case management or workflow management or performs investigations for fraud, abuse, waste or errors or any other entity which handles, evaluates, approves or submits claim payments through any means. The invention can also be used by healthcare merchants or self-insured employers to reduce improper payments.
The invention can be applied within a plurality of healthcare segments such as Hospital, Inpatient Facilities, Outpatient Institutions, Physician, Pharmaceutical, Skilled Nursing Facilities, Hospice, Home Health, Durable Medical Equipment and Laboratories. The invention is also applicable to a plurality of medical specialties, such as family practice, orthopedics, internal medicine and dermatology, for example. The invention can be deployed in diverse data format environments and in separate or a plurality of geographies, such as by zip code, county, metropolitan statistical area, state or healthcare processor region. This application can integrate within multiple types of claims processing systems, or systems similar in logical structure to claims process flows, to enable law enforcement, investigators, analysts and business experts to review and interact with the suspect providers, healthcare merchants, claims, transactions or beneficiaries, in order to:
Healthcare fraud is a major policy concern. In a Senate Finance Committee hearing, Chairman Baucus (D-Mont.) stressed the need for measurable results in fighting fraud, which costs taxpayers an estimated $60 billion each year.i Senator Coburn (R-OK), an advocate for paring down deficits and debt into the future, stressed in a NPR interview, that reducing Medicare fraud is the first step in reducing the deficit.ii
The increase in improper payments associated with fraud, abuse, waste and errors will continue to escalate until core functional issues are addressed, such as disparate systems, lack of meaningful analytics, inability to measure performance and lack of a coordinated risk management approach to attacking individual cost dynamics.
Steps have been taken over the past several years in an attempt to attack rising healthcare expenditures due to healthcare fraud, but with minimal results. For example, the Tax Relief and Health Care Act of 2006 required Congress to implement a permanent, national Recovery Audit Contractor (RAC) program by Jan. 1, 2010. The national RAC program was the outgrowth of a successful demonstration program launched by the Centers for Medicare and Medicaid Services (CMS) that used RACs to identify Medicare overpayments and underpayments to health care providers and suppliers in California, Florida, New York, Massachusetts, South Carolina and Arizona. The demonstration resulted in over $900 million in overpayments being returned to the Medicare Trust Fund between 2005 and 2008 and nearly $38 million in underpayments returned to health care providers.iii While providing necessary and incremental success in attacking overpayments after implementation, vulnerabilities surround the program. Examples include the focus on post-payment, high-dollar overpayments, mostly to hospitals, that recover pennies on the dollar versus pre-payment prevention; the lack of innovation and sophisticated targeting to identify perpetrators, which ultimately causes a high false-positive rate among those providers and suppliers identified; the negative impact to providers as part of the audit and measurement process, which ultimately increases their administrative costs because they need to hire more staff; the accuracy of RAC determinations; and antiquated database capabilities.iv,v,vi It is difficult to ascertain the overall financial benefit of the program, depending upon whether the sources of the estimates are advocates of the RAC program, such as CMS, or adversaries of the RACs, such as the American Hospital Association (AHA). The AHA claims significant appeals and overturned denials, while CMS presents minimal provider impact with maximum results.
While the numbers quoted are distinctly different between CMS and AHA, both sides can agree that there is room for improvement to reduce negative impacts on good providers.
CMS continued its goal of reducing improper payments by launching Medically Unlikely Edits (MUE) in January of 2007. A MUE for a HCPCS/CPT (procedure) code is the maximum units of service that a provider would report under most circumstances for a single beneficiary on a single date of service.vii These edits followed earlier National Correct Coding Initiative (NCCI) edits implemented by CMS in the mid-1990's. The NCCI edits identify where two procedures cannot be performed for the same patient encounter because the two procedures are mutually exclusive based on anatomic, temporal, or gender considerations.viii While both edit types are progressive in identifying payment errors, neither identifies fraud and abuse schemes perpetrated by providers or organized fraud rings.
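For illustration, both edit types reduce to simple deterministic checks. In the sketch below, the unit limits and mutually exclusive code pairs are hypothetical placeholders, not published CMS values:

```python
# Hedged sketch of MUE- and NCCI-style claim edits.
# The limits and code pairs below are hypothetical examples,
# NOT actual CMS published values.

# MUE: maximum units of service per HCPCS/CPT code, per beneficiary,
# per date of service.
MUE_LIMITS = {"80053": 1, "J0696": 8}  # hypothetical limits

# NCCI-style mutually exclusive procedure pairs (cannot both occur in
# the same patient encounter).
MUTUALLY_EXCLUSIVE = {frozenset({"58150", "58260"})}  # hypothetical pair

def mue_flags(claim_lines):
    """Return codes whose billed units exceed the MUE limit."""
    return [code for code, units in claim_lines
            if units > MUE_LIMITS.get(code, float("inf"))]

def ncci_flags(claim_lines):
    """Return mutually exclusive code pairs billed on the same encounter."""
    codes = {code for code, _ in claim_lines}
    return [tuple(sorted(pair)) for pair in MUTUALLY_EXCLUSIVE
            if pair <= codes]

claim = [("80053", 3), ("58150", 1), ("58260", 1)]
print(mue_flags(claim))   # ['80053']  -- 3 units exceeds the limit of 1
print(ncci_flags(claim))  # [('58150', '58260')]
```

As the paragraph above notes, edits of this kind catch coding and payment errors but cannot, by themselves, surface fraud or abuse schemes.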
In 2009, the Department of Justice (DOJ) and Health and Human Services (HHS) announced the creation of the Health Care Fraud Prevention and Enforcement Action Team (HEAT). With the creation of the HEAT team, the fight against Medicare fraud became a Cabinet-level priority.ix These law enforcement professionals took the war on reducing Medicare fraud to the doorstep of the individual perpetrators and organized fraud rings. For full-year 2011, strike force operations had charged a record number of 323 defendants, who allegedly collectively billed the Medicare program more than $1 billion. Strike force teams secured 172 guilty pleas, convicted 26 defendants at trial and sentenced 175 defendants to prison. The average prison sentence in strike force cases in FY 2011 was more than 47 months.x
In mid-2011, in an effort to bring sophistication and improvement to fraud prevention, a $77 million computer system was launched to stop Medicare fraud before it happens, defined as the Fraud Prevention System (FPS). Unfortunately, by Christmas 2011 the program had prevented just one suspicious payment, for approximately $7,000. Frustration with the lack of progress in attacking Medicare fraud and abuse by this expensive new program was outwardly promulgated by Senator Carper (D-DE) in February 2012: “Medicare has got to explain to us clearly that they are implementing the program, that their goals are well-established, reasonable, achievable, and they're making progress.”xi
More recently, the Government Accountability Office (GAO) reported that private contractors received $102 million to review Medicaid fraud data, yet had found only about $20 million in overpayments since 2008. The audits were found to be so ineffective that they were stopped or put on hold, according to the report. The agency studied Medicaid audits performed by 10 companies. The audits relied on Medicaid data that was often missing basic information, such as beneficiaries' names or addresses and provider ID numbers, experts testified during a Senate hearing.xii
In addition to struggling to find effective methods to reduce Medicare fraud, an additional barrier has arisen. That is, in order to achieve results that maximize return on investment from capital dollars invested, measuring performance is an administrative obstacle. Neither CMS nor members of the Senate can get an accurate gauge on how programs are performing separately or collectively. An example of this issue was highlighted in a hearing on Jul. 12, 2011, where Senator Brown (R-MA) inquired whether $150 million in expenditures for program integrity systems had been good investments, when no outcome performance metrics had been established to measure their actual benefit.xiii
A clear message that emerges throughout the select chronology of events outlined above is that the amount of potential savings is massive, but there are many obstacles to address before significant benefits or savings can be realized in reducing annual healthcare expenditures.
Congressional testimony, agency oversight reports, government program communications and requests for proposals (RFPs), as well as peer-to-peer conversations, have utilized several phrases to describe the issue associated with escalating healthcare costs, to the point where multiple descriptions have blurred the issue:
1) Fraud
2) Fraud, Waste and Abuse
3) Waste and Over-Utilization
4) Improper Payments
5) Payment Errors
6) Over Payments
While used generically and interchangeably—fraud, abuse, waste, over-servicing, over-utilization and errors are not all the same cost dynamic in financial terms. Each dynamic is different in terms of intent, financial impact, difficulty to identify and approach to pursue savings. It is impossible to address their negative influence until they are clearly defined at the lowest common denominator—the individual cost dynamic.
For this invention, independent sources are used to define each cost dynamic. Sources include 2011 GAO testimony before the Subcommittee on Federal Financial Management, Government Information, Federal Services, and International Security, Committee on Homeland Security and Governmental Affairs; Donald Berwick in an April 2012 JAMA white paper; and the Congressional Research Service in a 2010 Report for Congress:
Examples include incomplete or duplicate claims, claims where diagnosis codes do not match procedure codes, and unallowable code combinations, which are typically identified by claim edits.xix
Risk management is the identification, assessment, and prioritization of risks followed by coordinated and economical application of resources to minimize, monitor, and control the probability and/or impact of unfortunate eventsxx, in this case healthcare fraud, abuse and waste, over-servicing, over-utilization and errors. The concept of risk management was pioneered by the financial services industry almost 30 years ago to combat credit card fraud, which was, at that time, accelerating through the use of electronic payment technologies.
The impact of implementing fraud prevention predictive analytics in a risk management design for the credit card industry was a 50% reduction in fraud within five years of market usagexxi, even with queuing or referring odds of 3:1 on cases to be workedxxii. The value proposition of a risk management solution is in its design and foundation. It utilizes proven technology that mitigates fraud, abuse and waste with a cost structure over 20 times more economical than today's healthcare solutions. According to a McKinsey study, automated transaction technology from financial services has less than 1% in defects and manual review, as compared to healthcare technology that is estimated at up to 40%.xxiii
The typical risk management process is broken down into six steps. They include:xxiv
Leaders have several alternatives for the management of risk, including avoiding, assuming, reducing, or transferring the risks. This invention describes an Automated Healthcare Risk Management System to target and prevent losses from fraud, abuse, waste and errors.
A healthcare risk management design is a systematic approach that incorporates multiple capabilities and services into an overall solution, versus a single capability, to minimize losses based upon the economics of the overall risk and financial benefit. It provides the ability for a one-to-one interaction with customers, to reduce losses from bad actors before they are paid, while at the same time mitigating negative interactions on good customers—in this case providers and beneficiaries.
Risk management is not about having a single capability to fight all issues, it is about the collective benefit of multiple capabilities in a single solution to control ALL types of cost dynamics such as fraud, abuse, waste and errors. A single model, a single dataset, or single set of edits cannot control costs for all four cost dynamics.
The Automated Healthcare Risk Management System utilizes Software as a Service technology to host Real-time Predictive Models, Risk Adjusted Provider Cost Index, Edit Analytics, Strategy Management, Managed Learning Environment, Contact Management, Forensic GUI, Case Management And Reporting System For Preventing And Detecting Healthcare Fraud, Abuse, Waste and Errors individually and uniquely.
Throughout this invention, each cost dynamic is referred to specifically when discussing individual approaches to attack and mitigate them individually. Generic comments around fraud, abuse, waste or errors will be referred to as an improper payment.
The present invention is in the technical field of Analytical Technology for Improper Payment Prevention and Detection. The invention focuses prevention and detection efforts on the highest risk and highest value providers, healthcare merchants, claims, transactions or beneficiaries for fraud, abuse, over-servicing, over-utilization, waste or errors. More particularly, it pertains to claims and payments submitted or reviewed by public sector markets such as Medicare, Medicaid and TRICARE, as well as the private sector market which consists of commercial enterprise claim payers such as Private Insurance Companies (Payers), Third Party Administrators (TPA's), Medical Claims Data Processors, Electronic Clearinghouses, Claims Integrity Organizations, Electronic Payment, Healthcare Intermediaries and other entities that process and pay claims to healthcare providers.
This invention pertains to identifying improper payments by providers, healthcare merchants and beneficiaries, or collusion among any combination of the aforementioned, in the following healthcare segments:
Healthcare providers are here defined as those individuals, companies, entities or organizations that provide a plurality of healthcare services or products and submit claims for payment or financial gain in the healthcare industry segments listed in items 1-10 above. Healthcare beneficiaries are here defined as individuals who receive healthcare treatments, services or products from providers or merchants. A beneficiary is also commonly referred to as a “patient”. The beneficiary definition also includes individuals or entities posing as a patient who are in fact not legitimate patients and are therefore exploiting the patient role for personal or financial gain. A healthcare merchant is defined as an entity or individual, not meeting the exact definition of a healthcare provider, but having the ability to offer services or products for financial gain to providers, beneficiaries or healthcare intermediaries through any channel, including, but not limited to, a retail store, pharmacy, clinic, hospital, the internet or mail.
The present invention, defined as the Automated Healthcare Risk Management System for identifying Improper Payments, utilizes, for example, research, analysis, reporting, probability models or scores, cost or waste indexes, policy edits and empirical decision strategy management computer software application systems in order to facilitate human interaction with, and automated review of healthcare claims or transactions, providers, healthcare merchants or beneficiaries that have been determined to be at high risk for fraud, abusive, over-servicing, over-utilization, waste or errors.
The Automated Healthcare Risk Management System for Preventing And Detecting Healthcare Fraud, Abuse, Waste And Errors is a software application and interface that assists law enforcement, investigators and risk management analysts by focusing their research, analysis, strategy, reporting, prevention and detection efforts on the highest risk and highest value claims, providers, healthcare merchants or beneficiaries for fraud, abuse, over-servicing, over-utilization, waste or errors.
The objective of the invention is to provide effective fraud prevention and detection while improving efficiency and productivity for investigators. The Automated Healthcare Risk Management System for Healthcare Fraud, Abuse, Waste and Errors is connected to multiple large databases, which include, for example, national and regional medical and pharmacy claims data, as well as provider, healthcare merchant and beneficiary historical information, universal identification numbers, the Social Security Death Master File, Credit Bureau data such as credit risk scores and/or a plurality of other external data and demographics, Identity Verification Scores and/or Data, Change of Address Files for Providers or Healthcare Merchants, including “pay to” address, or Patients/Beneficiaries, previous provider, healthcare merchant or beneficiary fraud “Negative” (suppression) files or tags (such as fraud, provider sanction, provider discipline, provider retirement or provider licensure, etc.), Eligible Beneficiary Patient Lists and Approved Provider or Healthcare Merchant Payment Lists. It retrieves supporting views of the data in order to facilitate, simplify, enhance and implement the investigator's decisions, recommendations, strategies, reports and management treatments and actions. More specifically, the invention includes healthcare merchant and provider history, beneficiary or patient history, patient and provider interactions over time, provider diagnosis, actions, treatments and procedures across a patient spectrum, provider or segment cohort comparisons, reports and alternative empirical strategies for managing potentially fraudulent, abusive, over-servicing, over-utilization, waste or error claims and their subsequent payments.
Provider, healthcare merchant, claim and beneficiary information is prioritized within the Automated Healthcare Risk Management System by differing probability levels or likelihood of improper payment risk, and therefore requires differing levels of treatments or actions based upon economic spend and benefit or value, importing and utilizing:
Prior Art references interface software applications for provider, beneficiary and claim payment-monitoring systems, summarized into the following categories:
Their central function, or primary responsibility, is manually reviewing output through an online browser, which may or may not include efficient navigation. Additional capabilities are sometimes provided with the afore-mentioned categories. Those categories may include the following, but typically not more than one:
Prior Art inventions are less focused on the end-user's need for effective improper payment prevention and detection, with efficient resolution, than on delivering components and capabilities that emulate and automate the inefficient and ineffective environment that exists today.
There is little consideration by Prior Art of how to maximize the business goals of the end user, which are to improve and maximize the identification of improper payments, savings, recoveries and business return, and to optimize capital invested in the business, while introducing efficiencies that lower defects, resources, staff and overall costs. Most Prior Art applications are designed for business analysts and statisticians to operate, versus meeting the needs of nurses, physicians, medical investigators, law enforcement or adjustors within the healthcare industry, whose goal is to investigate and reach timely resolution of complex improper payment scenarios rather than wasting precious time learning and performing laborious analysis to locate improper payments.
End-users require efficient resolution, without the need to learn statistics, submit or create custom queries to pull historical data, or write or hard-code rules to identify fraud, abuse or waste. In particular, Prior Art applications are for creating, viewing and visually analyzing detection results post-payment, sometimes defined as descriptive statistics, where users are required to submit queries or run BI Tools to create population statistics, such as means, standard deviations or Z-Scores, to compare the performance of one observation to a population of its peers. Many times Prior Art references the use of hard copy and electronic reports, and graphing capabilities such as Color Columns, Charts, Histograms, Bar Charts, geography maps and dot graphics for visual investigations.
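For concreteness, the peer-population statistics that Prior Art requires users to build by hand can be sketched as follows; all billing figures are invented for illustration:

```python
# Sketch of the descriptive peer-group statistics Prior Art relies on:
# compare one provider's billing metric to its specialty cohort via a
# Z-score. All figures are invented for illustration.
import statistics

peer_avg_paid = [112.0, 98.5, 105.0, 110.2, 101.3, 99.9, 104.4, 107.1]
provider_avg_paid = 162.0  # the observation under review

mean = statistics.mean(peer_avg_paid)
stdev = statistics.stdev(peer_avg_paid)  # sample standard deviation
z = (provider_avg_paid - mean) / stdev

print(f"peer mean={mean:.2f}, stdev={stdev:.2f}, z={z:.2f}")
# A large positive z (e.g. > 3) marks the provider as a statistical
# outlier relative to its peers -- but, as noted above, it says nothing
# about intent or which cost dynamic is at work.
```

This is exactly the manual, post-payment analysis the specification argues investigators should not have to perform themselves.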
More particularly, Prior Art is designed as industry generic: agnostic versions developed in one industry, such as telecom or financial services, for fraud and generically applied to multiple other industries, versus being specifically developed and focused exclusively on preventing and detecting multiple healthcare improper payment types such as fraud, abuse, over-servicing, over-utilization, waste and errors. Prior Art tends to copy methods and capabilities from one industry and apply them to other industries without any thought to innovation or customization for that industry's issues or specific user needs and business objectives. Prior Art makes claims across multiple industries, including but not limited to Credit Card Portfolio Management, Credit Card Fraud, Workman's Compensation Fraud, Healthcare Diagnosis or Healthcare Applications to monitor provider or patient behavior. One size does not fit all applications.
Prior Art does not consider integration of systems and capabilities on the front end, defined as input, nor how each system or capability must tie together on the back end, defined as output. In particular, Prior Art rarely references Software As A Service (SAAS) as a simple means for integration. More importantly, end users are not considered for the final use and output: specifically, how providers, healthcare merchants, claims or beneficiaries that are identified with improper payments such as fraud, abuse, over-servicing, over-utilization or waste, along with the associated research, actions or treatments, are communicated from Prior Art payment monitoring systems efficiently and effectively to investigators. Prior Art does not consider how actions that are taken within the monitoring system are communicated back to legacy systems for actions upstream within the system or for performance reporting.
Prior art does not directly discuss the integration and use of multiple data sources, for example the Social Security Death Master File, external Credit Bureau data such as credit risk scores and/or a plurality of other external data and demographics, Identity Verification Scores and/or Data, Change of Address Files for Providers or Healthcare Merchants, including “pay to” address, or Patients/Beneficiaries, Previous provider, healthcare merchant or beneficiary fraud “Negative” (suppression) files or tags (such as fraud, provider sanction, provider discipline or provider licensure, etc.), Eligible Beneficiary Patient Lists and Approved Provider or Healthcare Merchant Payment Lists.
Prior Art also does not consider the requirement to have real-time monitoring and multiple triggers that fire when thresholds are exceeded for potential perpetrators of improper payments.
Lastly, Prior Art does not consider a key component in any system, which is the feedback loop. It is required for both model and strategy enhancement as well as developing optimized decision strategies, contact management strategies, treatment and actions. The feedback loop is also a key component to measuring results and determining business return on investment. While most Prior Art references standard or ad hoc reporting, it doesn't reference the capability to measure the true incremental benefit of new models, new strategies, new data, new variables, new treatments, new actions or alternative investigator staffing models compared to the current state, which is the control.
Prior Art for workstations, workflow management, case management or queue management monitor only fraud through interface software applications. Additionally, Prior Art mirrors or imitates what was previously done in a paper intensive environment or in a manual, human workflow management system to identify fraud. These types of workstations reference virtually no research, analysis, strategy management capabilities and only basic or standard reports. These are not intelligent systems, but “paper replacement” management “workstations” which offer less sophistication and merely automate what was previously done manually or on paper forms to target fraud—not the broad definition of improper payments which includes multiple cost dynamics such as fraud, abuse, over-servicing, over-utilization, waste and errors. In addition, Prior Art does not address, specifically, improper payments from multiple dimensions, including segment, provider, healthcare merchant, claim and beneficiary.
Prior Art doesn't consider law enforcement's and investigators' need to focus on additional compromise points, such as enrollment or identity credentialing, in addition to improper payments. There are multiple categories of risk types within healthcare that correspond to the multiple points of compromise within the healthcare value chain. The majority of risk and overpayment cost originates from the transaction category, and is perpetrated primarily by providers.xxv See the table below for examples summarizing the categories of compromise that one familiar with this field must objectively consider for a risk management solution.
Using computer software programs to automate and replicate existing, manual, paper-based fraud claim review results in only a small number or fraction of claims that can be reviewed at any given time. Specifically, if computers are used simply to automate current processes, then rather than reviewing millions of potentially improper payments, it is still only possible to inefficiently review a very small number of potentially fraudulent claims per analyst or investigator per day. This issue becomes very apparent when a large payer may require 4 million claims to be reviewed in a single day. This cumbersome process also means that there are no coordinated, sophisticated review capabilities for not only fraud, but also abuse, over-servicing, over-utilization, waste and errors across multiple geographies, across time, across beneficiary services or even within specialty groups. Prior Art infers an end-state where a decision is already known, not an intelligent system that automatically targets, identifies and presents suspects to an investigator to work.
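The throughput gap described above can be made concrete with simple arithmetic; the per-investigator review rate below is an assumed figure for illustration, not a number from this specification:

```python
# Back-of-the-envelope illustration of why manual review cannot scale.
# The per-investigator review rate is an assumption for illustration.
claims_per_day = 4_000_000          # large-payer volume cited above
reviews_per_investigator = 20       # assumed manual reviews per day

staff_for_full_review = claims_per_day / reviews_per_investigator
print(f"{staff_for_full_review:,.0f} investigators for 100% manual review")
# -> 200,000 investigators

# By contrast, a risk-scored queue referring only the top 0.1% of
# highest-risk claims:
referred = claims_per_day * 0.001
print(f"{referred:,.0f} claims referred, "
      f"{referred / reviews_per_investigator:,.0f} investigators needed")
```

The contrast motivates the scored, prioritized referral approach the invention takes instead of exhaustive manual review.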
Prior Art describes no “managed learning environment” within the review or assessment process to effectively and proactively test new actions or treatments, and to effectively measure the amount of incremental improper payment cost dynamic components, such as fraud, abuse, over-servicing, over-utilization, waste or errors, identified to optimize business return on investment. A managed learning environment is critical for monitoring the performance of each scoring model, characteristic, data source, strategy, action and treatment, allowing law enforcement or investigators to optimize each of their strategies or approaches to prevent and detect improper payments, as well as adjust to new types, techniques or behaviors of perpetrators, such as identity fraud, collusion, organized crime and rings, providers, healthcare merchants and beneficiaries. A managed learning environment provides the real-time capability to cost-effectively present only the highest-risk and highest-value providers, healthcare merchants, claims or beneficiaries to investigative analysts, who can systematically decline, or quickly research and take action on, high-risk healthcare improper payments. A key requirement of any business is ascertaining, or measuring, the effectiveness of capital spent versus the individual cost dynamics comprising improper payments prevented and detected, sometimes referred to as return on investment. In particular, there is no ability to quickly and optimally identify emerging patterns of fraud, abuse, over-servicing, over-utilization, waste or errors, or to adjust to changes in existing perpetrator behavior, without understanding cost and return trade-offs. Prior Art does not address either an ongoing managed learning environment or capabilities for measuring and optimizing business return.
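One common way to realize such a managed learning environment is a champion/challenger design borrowed from financial services risk management. The sketch below uses an invented strategy pair and simulated outcomes to show how a randomly held-out control group lets the incremental benefit of a new strategy be measured against the current one:

```python
# Minimal champion/challenger sketch of a managed learning environment:
# hold out a random control group on the current (champion) strategy,
# route the rest to a challenger strategy, then compare dollars identified.
# Strategy logic and outcome data are invented placeholders.
import random

random.seed(7)

def champion(claim):    # current strategy: flat dollar threshold
    return claim["amount"] > 500

def challenger(claim):  # test strategy: a (hypothetical) risk score cutoff
    return claim["risk_score"] > 0.8

claims = [{"amount": random.uniform(50, 2000),
           "risk_score": random.random(),
           "improper": random.random() < 0.05}  # simulated ground truth
          for _ in range(10_000)]

results = {"champion": 0.0, "challenger": 0.0}
for claim in claims:
    # 10% control group stays on the champion strategy
    arm = "champion" if random.random() < 0.10 else "challenger"
    strategy = champion if arm == "champion" else challenger
    if strategy(claim) and claim["improper"]:
        results[arm] += claim["amount"]

for arm, dollars in results.items():
    print(f"{arm}: ${dollars:,.0f} improper payments identified")
# Comparing the two arms (normalized by their volumes) measures the
# challenger's incremental benefit against the control -- the feedback
# loop the specification requires.
```

In practice the same split would be applied to new models, data sources, treatments or staffing models, with the control group preserved so incremental benefit remains measurable.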
Prior Art outlines Data Ad Hoc Queries, Business Intelligence Tools, Data Mining or Data Analytics, Preprocessing or Rules, Decision Strategy Management Capabilities and Report Tree capabilities that may also be combined, or run independent of, interface software applications for monitoring providers, beneficiaries and healthcare claim payments for fraud or abuse.
Data Ad Hoc Queries, Business Intelligence Tools, Data Mining or Data Analytics have several limitations:
Prior Art also describes monitoring system capabilities that complete pre-processing for errors, or that have decision strategy management rules, parameters, trees, tree reports, filters or policies used to identify fraud or abuse. Categorically, these capabilities are all some form of rules, which are both inefficient and ineffective, even though they are intended to help claim payers or users determine which of the claims submitted by providers are within acceptable policies, guidelines, or fraud or abuse risk. These approaches do not directly identify, evaluate and quantify ALL cost dynamics associated with improper payments.
Although Prior Art may have the opportunity to import what is generically defined as a predictive model score(s), here defined as scoring, to monotonically rank order claims to be reviewed, these capabilities do not take advantage of the research, analysis and empirical and adaptive strategy management capabilities that modern scoring enables. In particular, these capabilities or applications rely on judgmental, anecdotal and sub-optimal rules, trees, tree reports, filters and policies, in combination with scoring, to manage the investigative review process. Users are sometimes required to create additional websites, screens or queues using trees or tree reports in an attempt to create efficiency and effectiveness, but this further perpetuates the very issues being solved for: effective and efficient identification and resolution of improper payments by investigators, for example law enforcement, nurses, physicians, medical investigators or adjustors within the healthcare industry, whose goal is to find timely resolution to complex improper payment scenarios.
In order to manage risk and prevent and detect improper payments on the billions of healthcare claims per year, investigators are unable to focus on an optimal or manageable subset of the riskiest, most valuable payments, or to ascertain business return. It does not matter whether sub-second, state-of-the-art processing platforms or mainframe computer systems are used to conduct reviews, because both are sub-optimal for identifying improper payments effectively and resolving them efficiently with decision strategy management rules, parameters, trees, tree reports, filters or hard-coded policies.
Prior Art identifies an explosion of manually programmed rules used to implement policies and to detect only fraud and abuse, either independent of monitoring systems or within them. During a review process, hundreds of rules may have been breached, or fired, to identify a claim or provider to be reviewed. This large number of rule exceptions causes several major problems for the investigator during the review process:
Prior Art describes Business Intelligence (BI) Tools, Data Mining and Data Analytics and database query capabilities combined with workstations, workflow management, case management or queue management interface software applications for monitoring healthcare providers, beneficiaries and claim payments. Viewing data is their central function, with SQL-type query capabilities or enhanced graphing for traversing through data, storing data models and ad hoc data-driven analysis. Generic appending of or access to scoring, typically from parametric predictive models, writing or submitting computer programs, creating custom web sites, and allowing business analysts to create judgmental report trees are recent additions to these categories. The Business Intelligence (BI) tools or data queries are utilized to create ad hoc queries or programs, which emulate rules, to identify pockets or segments of potential fraud by accessing a database. None creates the environment for a feedback loop to measure performance, improve effectiveness or address the remaining cost dynamics associated with improper payments.
Prior Art describes parametric measurements, such as attribute means, medians, standard deviations or Z-scores, combined with queries to ineffectively identify outliers. The high false positive rates associated with parametric methods used in healthcare, and the reliance on "families" of supervised modeling techniques included with the prior art, cause investigator ineffectiveness. Additionally, Prior Art discusses computer-implemented methods of analyzing results of a predictive model applied to data pertaining to a plurality of entities, displaying a rank-ordering of at least some of the entities according to their variance from the mean or median or scores for each of the displayed entities. Database output is accessed visually using a workstation, or via programs that populate generic or custom queues, web sites or reports to be accessed by investigators.
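The parametric weakness described above can be illustrated with a short sketch (the billing amounts, function name and threshold are hypothetical, not drawn from any Prior Art system): a single legitimate high-cost claim inflates the mean and standard deviation, so a Z-score cut-off flags the legitimate extreme as an outlier while a modestly inflated fraudulent claim hides inside the distorted distribution.

```python
import statistics

def z_score_flags(amounts, threshold=2.0):
    """Flag amounts whose Z-score exceeds a threshold -- the
    parametric outlier method the Prior Art relies upon."""
    mean = statistics.mean(amounts)
    stdev = statistics.stdev(amounts)
    return [abs(a - mean) / stdev > threshold for a in amounts]

# Hypothetical, right-skewed billing data: seven routine claims,
# one modestly inflated (fraudulent) claim of 800, and one
# legitimate high-cost specialist claim of 30000.
amounts = [100, 110, 95, 105, 120, 98, 102, 800, 30000]
flags = z_score_flags(amounts)

# The legitimate 30000 claim is flagged (a false positive), while
# the inflated 800 claim is not flagged (a false negative).
```

The skew cuts both ways: the one flag raised is a false positive, and the actual improper payment never exceeds the threshold, which is the double failure mode attributed to parametric methods above.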
Prior Art references a hyper-link to a report tree, which contains a plurality of hyper-linked reports. Report trees systematically emulate the paper environment. The output includes a plurality of reports comprising: a suspect list of entities, each entity's activity by a selected categorization of that activity, a distribution chart, subset reports, and a peer group comparison report. The Prior Art approach suffers the same weaknesses as rules, both from a processing perspective and in an investigator's ability to improve efficiency and information transparency.
It is virtually impossible to apply individual strategies when using rules and it is impossible to report results or effectiveness of rules in detecting fraud and abuse because there is no way to evaluate how effective an individual rule is in detecting fraud or abuse, especially when fraud and abuse each have subtle behavioral differences. Particularly, this is not a focused risk management platform, but a workstation display capability based upon rules outputting data from a database.
Suppose, for example, there are 10,000 rules, not an uncommon number, used to implement claim payer policies and to detect fraud, abuse or improper payments. Suppose also, that a claim to be paid is sent to a fraud investigator for review because 150 of the rule criteria or parameters were exceeded. Suppose further that the claim turns out to be fraudulent. There is no way to identify or report which variable or rule “caused” the fraud claim to be “detected”. Prior Art does not describe an accurate method to report overall performance of the individual rules. This same condition exists for implementation of new policies or procedures. It is impossible to determine which rules are effective at testing and implementing new payer claim procedures or policies when hundreds of rule exceptions might be associated with each potential new or changed procedure. This statement is true whether predictive model scores or individual characteristics are used with the rules or report trees. This issue is further perpetuated when looking at multiple cost dynamics for improper payments such as fraud, abuse, over-servicing, over-utilization, waste or errors.
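The attribution problem in the example above can be sketched in a few lines (the rule names, claims and counts are invented for illustration): if every fired rule is naively credited with a detection, the per-rule "detections" sum to several times the number of actual frauds, so no individual rule's effectiveness can be reported.

```python
# Each hypothetical claim fires a set of rules; "fraud" marks the
# confirmed outcome after investigation.
claims = [
    {"rules_fired": {"R1", "R2", "R3"}, "fraud": True},
    {"rules_fired": {"R2", "R3", "R4"}, "fraud": True},
    {"rules_fired": {"R1", "R4"},       "fraud": False},
]

# Naive attribution: credit every fired rule with the detection.
credit = {}
for claim in claims:
    if claim["fraud"]:
        for rule in claim["rules_fired"]:
            credit[rule] = credit.get(rule, 0) + 1

total_credit = sum(credit.values())               # 6 claimed "detections"...
actual_frauds = sum(c["fraud"] for c in claims)   # ...for only 2 frauds
```

With 150 rules firing per claim instead of 3, the over-counting grows accordingly, which is why no accurate per-rule performance report is possible under this scheme.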
Overall, Prior Art describes interface software applications, such as workstations, workflow management systems or case management systems with database capabilities which are generally driven by judgmental decision strategy rules, trees, filters, ad hoc database queries and report tree logic. This “passive” approach and cumbersome detection and case management activity is inefficient, even if defined as real-time. Rules, filters, decision or report trees, database queries and parameter driven workstations suffer the same weaknesses in fraud risk case management workstations as they do in fraud detection, even if they include predictive models and real time processing. More particularly, decision strategy management rules-based approaches, including trees and report trees, have the following weaknesses when used in workflow management systems, case management or queue management systems:
As described earlier, Prior Art references the possibility of combining predictive model scores, with associated reason codes, with Business Intelligence (BI) Tools or database queries. In particular, Prior Art references parametric methods or supervised techniques such as regression, multiple regression, neural nets or clusters, and behavioral profiling techniques. Prior Art sometimes describes the use of probability tables based upon historical database performance. Prior Art is describing a redundant version of what is used in Financial Services credit card systems, without customization to meet the needs of healthcare investigators.
Prior Art also references unsupervised techniques using database analysis or data queries. In particular, Prior Art refers to Z-score models as an input to decision management strategy trees. All of these model methodologies create the same type of ineffectiveness and inefficiency that was introduced with rules and edits. Parametric methods, or outlier analysis, combined with rules, create inaccuracies on both sides of a data distribution. This is because supervised modeling approaches and Z-scores are limited to segmenting a population into only the worst 0.5%-1.0% of risk. More particularly, the methodology described neutralizes any rank-order capability below the top 1% when rules are applied.
Documenting that Prior Art has the ability to rank order risk within the rules or trees does not make Business Intelligence (BI) Tools, decision management strategies, rules or data queries any more effective or efficient for health care fraud prevention than previous manual methods. See utility patent application Ser. No. 13/074,576 (Rudolph, et al.), filed Mar. 29, 2011 or utility patent application Ser. No. 13/617,085 (Jost, et al.), filed Sep. 14, 2012 for an in depth discussion of modeling weakness of parametric modeling techniques and traditional non-parametric approaches.
A further weakness of Prior Art is its computer-implemented methods of analyzing results of a predictive model applied to data pertaining to a plurality of entities. It references predictive modeling and report trees, but does not reference or expand on enhanced capabilities that specifically provide improved detection and prevention, improved efficiency and effectiveness, ease of investigation, the ability to better manage staff, or information transparency for the user, such as law enforcement, investigators, analysts and business experts. Additionally, Prior Art references sampling capabilities, but these are a simple browser-based method used to sample displayed data, versus sampling the total population in the empirically derived and statistically valid manner required for experimental design tests. Prior Art sampling techniques are biased and skewed based upon the displayed data. Lastly, report tree technology is not designed or utilized for creating a managed learning environment to optimize fraud or abuse prevention effectiveness, treatment effectiveness or maximize user business goals or return on investment; report trees are static trees with hard cut-offs, backed up by static reports.
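The sampling distinction above can be made concrete with a short sketch (the population, score cut-off and sample sizes are hypothetical): a browser-based sample drawn only from the displayed, high-scoring claims can never contain a lower-scoring claim, whereas a statistically valid sample for an experimental design test is drawn from the full population.

```python
import random

population = list(range(10000))              # all claim ids, 0..9999
# "Displayed" subset: only the high-scoring claims shown on screen
displayed = [c for c in population if c >= 9000]

random.seed(42)  # fixed seed so the sketch is reproducible
browser_sample = random.sample(displayed, 50)    # biased sampling frame
valid_sample = random.sample(population, 50)     # full-population frame

# The browser-based sample is confined to the displayed segment, so
# any estimate built from it is skewed toward that segment.
```

Because the browser-based frame excludes 90% of the population by construction, no reweighting of its results can recover an unbiased estimate, which is the bias the passage above attributes to Prior Art sampling.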
Prior Art provides descriptions for summary report trees and report comparisons of the activity of the entity to the activity of the entity's peers with respect to procedure code groups, diagnosis code groups, type of service codes or place of service codes, but it does not provide automated statistical comparison references or the use of statistical measurements, for example Chi-Square measurements, to determine differences decisively. Decisions are based upon anecdotal comparisons made by viewing the predefined reports or running queries. Prior Art references comparisons of the activity of the entity in each of a plurality of demographics, such as age groups of the entity's clients, to the activities of the entity's peers in each group. The basic summary report, for peer-to-peer comparisons, compares one month of activity for simple activity characteristics, including:
Prior Art also includes basic figures, graphs and inventory of predetermined reports:
Prior Art reporting capabilities are manual detection methods that further exacerbate already inefficient improper payment detection and resource management.
In addition to the limitations referenced in the Prior Art comparisons, several others are worthy of discussion:
See Appendix 1-3 (attached) for a more detailed description of the prior art.
The present invention is an Automated Healthcare Risk Management System for efficient and effective identification and resolution of healthcare fraud, abuse, over-servicing, over-utilization, waste and errors. It is a software application and interface that assists nurses, physicians, medical investigators, law enforcement or adjustors and risk management experts by focusing their prevention efforts on the highest-risk and highest-value providers, healthcare merchants, medical claims or beneficiaries (sometimes defined as patients) with improper cost dynamic components such as fraud, abuse, over-servicing, over-utilization, waste and errors. It uses empirically derived, statistically valid, probabilistic scores to identify medical claim, provider, healthcare merchant and beneficiary related fraud and abuse as inputs to streamline identification and review of potentially fraudulent or abusive transactions. Further, it utilizes a population risk adjusted provider cost or waste index methodology to identify waste, over-servicing or over-utilization, and presents the results to nurses, physicians, medical investigators, law enforcement or adjustors and risk management experts for action. Additionally, compliance profiling is utilized to identify and present claims that contain errors and should not be paid. The Automated Healthcare Risk Management System applies automated empirical decision strategies to manage risk for suspect claims or transactions, systematically conducts analysis and optimizes the effectiveness of alternative strategies, treatments and actions for investigators. It subsequently reports on the results and effectiveness of risk management operations and its resources to leadership.
More particularly, the Automated Healthcare Risk Management System utilizes real-time Predictive Models, a Provider Cost Index, Edit Analytics, Strategy Management, a Managed Learning Environment, Contact Management and a Forensic GUI for targeting, individually identifying and preventing fraud, abuse, waste and errors prior to payment. Probabilistic scores are utilized to optimize return on investment, expected outcomes and resource management. The Automated Healthcare Risk Management System assists healthcare claims investigators and risk management experts through automated review of hundreds of millions of claims or transactions, focusing their research, analysis, strategy, reporting and prevention efforts on only the highest-risk and highest-value claims for fraud, abuse, improper payments or over-servicing. Use of the Automated Healthcare Risk Management System does not require the education and experience of a statistician, programmer, or data or business analyst. It is designed for typical investigators in the healthcare industry, such as nurses, physicians, medical investigators or adjustors, whose goal is to find timely resolution to complex fraud or abuse scenarios rather than to spend precious time learning how to build queries, perform analysis and search for suspect providers, healthcare merchants, beneficiaries or facilities, for example.
The system can be connected to multiple large databases, which include, for example, national and regional medical and pharmacy claims data, as well as provider, healthcare merchant and beneficiary historical information, universal identification numbers, the Social Security Death Master File, Credit Bureau data such as credit risk scores and/or a plurality of other external data and demographics, Identity Verification Scores and/or Data, Change of Address Files for Providers or Healthcare Merchants, including “pay to” address, or Patients/Beneficiaries, Previous provider, healthcare merchant or beneficiary fraud “Negative” (suppression) files or tags (such as fraud, provider sanction, provider discipline or provider licensure, etc.), Eligible Beneficiary Patient Lists and Approved Provider or Healthcare Merchant Payment Lists. It automatically retrieves supporting views of the data in order to facilitate, enhance and implement multiple investigator decisions for claims, providers, healthcare merchants and beneficiaries with systematic recommendations, strategies, reports and management actions. More specifically, the data includes beneficiary history, provider, healthcare merchant and beneficiary interactions over time, provider actions and treatments, provider cohort comparisons, reports and alternative and adaptive strategies for managing potentially risky or costly claims or transactions associated with fraud, abuse, improper payments or over-servicing. The claims, transactions and other provider, healthcare merchant, beneficiary and facility information are prioritized from high fraud risk to low risk based upon:
Although very recent to healthcare, scoring models have helped alleviate some of the problems associated with the random or rules-based approach to the review of healthcare claims. See utility patent application Ser. No. 13/074,576 (Rudolph, et al.), filed Mar. 29, 2011 or utility patent application Ser. No. 13/617,085 (Jost, et al.), filed Sep. 14, 2012 for an in-depth discussion of the weaknesses of parametric modeling techniques and traditional non-parametric approaches. However, Automated Risk Management infrastructure does not exist that makes it more efficient and effective for a nurse, physician, medical investigator or adjustor to identify and quickly resolve fraud, abuse or improper payments for providers, beneficiaries, claims or merchants. For example, some fraud models group the top 0.5%-1.0% of claims based upon an outlier score. Reviewers then sort the claims from highest risk to lowest risk manually within their workstation or within a spreadsheet that has been downloaded to a PC. Infrastructure is not considered for the historical review of procedures, claims or diagnosis codes across provider specialty groups, markets, segments or geographies. Prior Art references that most research is still completed using manual ad hoc pulls of data. More particularly, efficient resolution is not tied to the system containing the score and history. By combining the following capabilities within the Automated Healthcare Risk Management System, efficiency and effectiveness can become even greater when assessing, identifying and investigating high-risk claims, providers, healthcare merchants and beneficiaries. The major components of the Automated Healthcare Risk Management System include, but are not limited to:
Measurements, with Scheduled Dynamics Displayed
A plurality of attributes may be actively, versus passively, presented on the Automated Healthcare Risk Management System's Variable Inventory—including, but not limited to:
FIG. 1—High Level Block Diagram Showing Risk Management Process to Identify and Investigate Fraud, Abuse, Waste and Errors
FIG. 2—Shows Flow For Historical Data Summary Statistical Calculations
FIG. 3—Shows Flow For Predictive Model Score Calculation, Validation And Deployment Process
FIG. 4—Shows A Provider Claim Score Reason Summary Screen
FIG. 5—Shows Risk Adjusted Provider Cost Index Calculation and Deployment Process
FIG. 6—Shows Risk Adjusted Provider Cost Index Calculation and Deployment Process
FIG. 7—Shows Risk Adjusted Provider Cost Index Calculation and Deployment Process
FIG. 8—Presents A Provider Over-Servicing, Over-Utilization, Waste Mathematical, Graphical Example
FIG. 9—Presents A Provider Over-Servicing, Over-Utilization, Waste Mathematical, Risk Adjusted Drilldown Example
FIG. 10—Edit Analytics Assessment Process And Deployment Process
FIG. 11—Provider Edit Analytics Landing Page—NCCI and MUE Edit Example
FIG. 12—Fraud Prevention Risk Management Process
FIG. 13—Diagram Combining Analytical Technology With Managed Learning Environment
FIG. 14—Shows An Input Screen Example Of Application View Schematic—Strategy, Managed Learning Environment, Actions
FIG. 15—Strategy With Real Time Queuing Example
FIG. 16—Shows An Example Of A High Score Claims Queue
FIG. 17—Shows A Secure Login Screen Example
FIG. 18—Strategy Manager Hash Input Example
FIG. 19—Strategy Manager Data Input Tables And Input Fields Example
FIG. 20—Contact Management Flow And Deployment
FIG. 21—Provides An Example Of Capability Access And Selection Example
FIG. 22—Example Search Screen For Good And Bad Claims, Providers, Healthcare Merchants And Beneficiaries
FIG. 23—Presents An Example Of Research Screen Column Configuration
FIG. 24—High Score Claims Queue—Instant Profile
FIG. 25—Presents A Multi-Dimensional Mapping Example For Provider Segment
FIG. 26—Displays a Provider Address Verification Mapping Example
FIG. 27—Example Of Feedback Loop Dropdown Box, Notes Inputs And Navigation Tabs
FIG. 28—Shows Provider Claim Procedure Detail Screen
FIG. 29—Provider Sub-claim History Example
FIG. 30—Provides Investigator Provider Profiling Examples
FIG. 31—Shows A Provider Comparative Billing Analysis Screen Example
FIG. 32—Example Schematic For Strategy And Sub-Strategy Targeting
FIG. 33—Shows An Example Of An Optimized Decision Strategy
While this invention may be embodied in many different forms, there are described in detail herein specific preferred embodiments of the invention. This description is an exemplification of the principles of the invention and is not intended to limit the invention to the particular embodiments illustrated.
The present invention is an Automated Healthcare Risk Management System. The present invention utilizes a Software as a Service design, Analytical Technology and a Risk Management design in order to optimally facilitate human interaction with and automated review of hundreds of millions of healthcare claims or transactions, or hundreds of thousands of providers or healthcare merchants, to determine whether the participants are at high risk for fraud, abusive practices, over-servicing, waste or errors.
A plurality of external and internal data and predictive models can be made available for processing in the Scoring Engine, Decision Strategies, Strategy Manager, Managed Learning Environment and Forensic Investigation Graphical User Interface. Referring now to
From the Historical Data Summary Statistics Data Security Module 105 the data is sent to the Raw Data Preprocessing Module 106 where the individual claim data fields are then checked for valid and missing values and duplicate claim submissions. The data is then encrypted in the Historical Data Summary Statistics External Data Security Module 107 and configured into the format specified by the Application Programming Interface 108 and sent via secure transmission device to an External Data Vendors Data Security Module 109 for un-encryption. External Data Vendors Module 110 then append(s) additional data such as Unique Customer Pins/UID's (proprietary universal identification numbers), Social Security Death Master File, Credit Bureau scores and/or data and demographics, Identity Verification Scores and/or Data, Change of Address Files for Providers, including “pay to” address, or Patients/Beneficiaries, Previous provider or beneficiary fraud “Negative” (suppression) files or tags (such as fraud, provider sanction, provider discipline or provider licensure, etc.), Eligible Beneficiary Patient Lists and Approved Provider Payment Lists. The data is then encrypted in the External Data Vendor Data Security Module 109 and sent back via the Application Programming Interface in Module 108 and then to the Historical Data Summary Statistics External Data Security Module 107 to the Appended Data Processing Module 112. If the external database information determines that the provider or patient is deemed to be deceased at the time of the claim or to not be eligible for service or to not be eligible to be reimbursed for services provided or is not a valid identity, at the time of the original claim date, the claim is tagged as “invalid historical claim” and stored in the Invalid Historical Claim Database 111. These claims are suppressed for claim payments and not used in calculating the statistical values for the fraud and abuse predictive model score. 
They may be referred back to the original claim payer or processor and used in the future as an example of fraud. The valid claim data in the Appended Data Processing Module 112 is reviewed for valid or missing data and a preliminary statistical analysis is conducted summarizing the descriptive statistical characteristics of the data.
Referring back to
Another copy of claim data is sent from the Appended Data Processing Module 112 to the Claim Historical Summary Statistics Module 114 where the individual values of each claim are accumulated into claim score calculated variables by industry type, provider, patient, specialty and geography. Examples of individual claim variables include, but are not limited to: fee amount submitted per claim, sum of all dollars submitted for reimbursement in a claim, number of procedures in a claim, number of modifiers in a claim, change over time for amount submitted per claim, number of claims submitted in the last 30/60/90/360 days, total dollar amount of claims submitted in the last 30/60/90/360 days, comparisons to 30/60/90/360 day trends for amount per claim and sum of all dollars submitted in a claim, ratio of current values to historical periods compared to peer group, time between date of service and claim date, number of lines with a proper modifier, and ratio of the amount of effort required to treat the diagnosis compared to the amount billed on the claim.
Within the Claim Historical Summary Statistics Module 114, historical descriptive statistics are calculated for each variable for each claim by industry type, specialty and geography. Calculated historical summary descriptive statistics include measures such as the median and percentiles, including deciles, quartiles, quintiles or vigintiles. The historical summary descriptive statistics for each variable in the predictive score model are used in Standardization Module 212 in order to calculate normalized variables related to the individual variables for the predictive scoring models.
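The descriptive statistics described above can be sketched as follows (the fee values and peer-group cell are hypothetical; only the median and decile measures named in the text are computed):

```python
import statistics

def summary_stats(values):
    """Historical summary descriptive statistics for one score
    variable within one peer group (e.g. a specialty x geography
    cell), as in the Claim Historical Summary Statistics Module 114."""
    return {
        "median": statistics.median(values),
        "deciles": statistics.quantiles(values, n=10),  # 9 cut points
        "min": min(values),
        "max": max(values),
    }

# Hypothetical fee-per-claim history for one specialty/geography cell
fees = [80, 95, 100, 100, 110, 120, 125, 130, 150, 400]
stats = summary_stats(fees)
```

These per-cell statistics are what the Standardization Module 212 later consumes to turn a raw claim value into a peer-relative, normalized variable.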
Another copy of the data is sent from the Appended Data Processing Module 112 to the Provider Historical Summary Statistics Module 115 where the individual values of each claim are accumulated into provider score variables by multiple dimensions, for example by industry type, provider, specialty and geography. Examples of individual claim variables include (but are not limited to): amount submitted per claim, sum of all dollars submitted for reimbursement in a claim, number of patients seen in 30/60/90/360 days, total dollars billed in 30/60/90/360 days, number of months since provider first started submitting claims, change over time for amount submitted per claim, comparisons to 30/60/90/360 trends for amount per claim and sum of all dollars submitted in a claim, ratio of current values to historical periods compared to peer group, time between date of service and claim date, number of lines with a proper modifier.
Within Provider Historical Summary Statistics Module 115, historical summary descriptive statistics are calculated for each variable for each Provider by industry type, specialty and geography. Calculated historical descriptive statistics include measures such as the median, range, minimum, maximum, and percentiles, including deciles, quartiles, quintiles and vigintiles for the Physician Specialty Group. The Provider Historical Summary Statistics Module 115 for all industry types, specialties and geographies are then used by the Standardization Module 212 to create normalized variables for the predictive scoring models.
Another copy of the data is sent from the Appended Data Processing Module 112 to the Patient Historical Summary Statistics Module 116. The historical summary descriptive statistics are calculated for the individual values of the claim and are accumulated for each beneficiary (patient) score variable by industry type, patient, provider, specialty and geography for all Patients who received a treatment (or supposedly received one). An example of this type of aggregation would be all claims filed by a patient in Specialty Type "Orthopedics" in the state of Georgia, for the number of office visits in the last 12 months (or the last 30, 60, 90 or 360 days, for example) or the median distance traveled to see the Provider. The Patient Historical Summary Statistics Module 116 for all industry types, specialties and geographies is then used by the Standardization Module 212 to create normalized variables for the predictive scoring models.
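The aggregation just described can be sketched as follows (the claim records, field names and cell key are hypothetical): individual claim values are accumulated into one cell per patient, specialty and geography combination.

```python
from collections import defaultdict

# Hypothetical claim records; field names are illustrative only.
claims = [
    {"patient": "P1", "specialty": "Orthopedics", "state": "GA", "visits": 1},
    {"patient": "P1", "specialty": "Orthopedics", "state": "GA", "visits": 1},
    {"patient": "P2", "specialty": "Orthopedics", "state": "GA", "visits": 1},
]

# Accumulate office-visit counts per (patient, specialty, geography)
# cell, mirroring the accumulation in the Patient Historical Summary
# Statistics Module 116.
totals = defaultdict(int)
for c in claims:
    key = (c["patient"], c["specialty"], c["state"])
    totals[key] += c["visits"]
```

Additional dimensions named in the text (industry type, provider, trailing 30/60/90/360-day windows) would simply extend the cell key in the same fashion.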
Referring now to
The data is then sent via a secure transmission device to the Predictive Score Model Deployment and Validation System Application Programming Interface Module 203 and then to the Data Security Module 204 within the scoring deployment system for un-encryption. Each individual claim data field is then checked for valid and missing values and is reviewed for duplicate submissions in the Data Preprocessing Module 205. Duplicate and invalid claims are sent to the Invalid Claim and Possible Fraud File 206 for further review or sent back to the claim payer for correction or deletion. The remaining claims are then sent to the Internal Data Security Module 207 and configured into the format specified by the External Application Programming Interface 208 and sent via secure transmission device to External Data Vendor Data Security Module 209 for un-encryption. Supplemental data is appended by External Data Vendors 210 such as Unique Customer Pins/UID's (proprietary universal identification numbers) Social Security Death Master File, Credit Bureau scores and/or data and demographics, Identity Verification Scores and/or Data, Change of Address Files for Providers or Patients/Beneficiaries Previous provider or beneficiary fraud “Negative” (suppression) files, Eligible Patient and Beneficiary Lists and Approved Provider Lists. The claim data is then sent to the External Data Vendors Data Security Module 209 for encryption and on to the External Application Programming Interface 208 for formatting and sent to the Internal Data Security Module 207 for un-encryption. The claims are then sent to the Appended Data Processing Module 211, which separates valid and invalid claims. 
If the external database information (or link analysis) reveals that the patient or provider is deemed to be inappropriate, such as deceased at the time of the claim or to not be eligible for service or not eligible to be reimbursed for services provided or to be a false identity, the claim is tagged as an inappropriate claim or possible fraud and sent to the Invalid Claim and Possible Fraud File 206 for further review and disposition.
One copy of the individual valid current claim or batch of claims is also sent from the Appended Data Processing Module 211 to Standardization Module 212 in order to create claim level variables for the predictive score models. In order to perform this calculation the Standardization Module 212 requires both the current claim or batch of claims from the Appended Data Processing Module 211 and a copy of each individual valid claim statistic sent from the Historical Procedure Code Diagnosis Code Master File Table in Module 113, Claim Historical Summary Statistics Module 114, Provider Historical Summary Statistics Module 115 and Patient Historical Summary Statistics Module 116.
The Standardization Module 212 converts raw data individual variable information into values required for use in the predictive score models. When using the raw data from the claim, plus the statistics about the claim data from the Historical Claim Summary Descriptive Statistics file modules, the Standardization Module 212 creates input variables for the predictive scoring models. The individual claim variables are matched to historical summary claim behavior patterns to calculate the current individual claim's historical behavior pattern. These individual and summary evaluations are transformations of each variable related to the individual claim.
In order to create normalized variables for the claim predictive score model, one copy of each summarized batch of claims is sent from the Claim Historical Summary Descriptive Statistics file in Module 114 to the Standardization Module 212. The Standardization Module 212 is a claim processing calculation where current, predictive score model summary normalized variables are created by matching the corresponding variable's information from the Claim Historical Summary Descriptive Statistics file in Module 114 variable parameters to the current summary behavior pattern to calculate the current individual claim's historical behavior pattern, as compared to a peer group of claims in the current claim's specialty and geography. These individual and summary evaluations are normalized value transformations of each variable related to the individual claim or batch of claims. All of the score variables created in the Standardization Module 212 are then sent to Transformation Module 213. The purpose of Transformation Module 213 is to transform the raw, normalized value of each variable in the fraud and abuse detection predictive score models into an estimate of the probability that this value indicates likely fraud or abuse. While any supervised or unsupervised modeling approach will work within this agnostic scoring process, it is recommended that unsupervised non-parametric methodology be used to create the individual input variables and scores, due to the weaknesses of most parametric methods and traditional non-parametric methods. See utility patent application Ser. No. 13/074,576 (Rudolph, et al.), filed Mar. 29, 2011 or utility patent application Ser. No. 13/617,085 (Jost, et al.), filed Sep. 14, 2012 for an in-depth discussion of the modeling weaknesses of parametric modeling techniques and traditional non-parametric approaches.
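A minimal sketch of the two steps above follows. The decile cut points and per-decile probabilities are invented placeholders, not empirically derived values: the standardization step maps a raw claim variable to its peer-group decile, and the transformation step looks up an estimated fraud probability for that decile.

```python
import bisect

def standardize(value, decile_cuts):
    """Map a raw variable to its peer-group decile (0-9), as the
    Standardization Module 212 normalizes a claim against the
    historical summary statistics of its peer group."""
    return bisect.bisect_right(decile_cuts, value)

# Hypothetical peer-group decile cut points for "fee per claim"
decile_cuts = [90, 100, 105, 110, 118, 124, 128, 140, 200]

# Hypothetical per-decile fraud probability estimates, standing in
# for the empirically derived mapping of Transformation Module 213.
fraud_prob_by_decile = [0.001, 0.001, 0.002, 0.002, 0.003,
                        0.004, 0.006, 0.010, 0.030, 0.120]

decile = standardize(350, decile_cuts)   # far above the top cut point
prob = fraud_prob_by_decile[decile]
```

A real deployment would derive both tables per specialty and geography cell from Modules 114-116; the sketch only shows the shape of the normalize-then-transform pipeline.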
In order to create Provider Level variables for the predictive score model, one copy of each summarized batch of claims per Provider is sent from the Historical Provider Summary Descriptive Statistics file in Module 115 to the Standardization Module 212. The Standardization Module 212 is a claim aggregation and processing calculation. Aggregation dimensions for the Provider may resemble the following design, but others may include claims-level, day-of-week and geography:
Current, predictive score model summary normalized variables are created by matching the corresponding variable's information from the Historical Provider Summary Descriptive Statistics file in Module 115 variable parameters to the current summary behavior pattern, in order to calculate the current individual provider's claims historical behavior pattern as compared to a peer group of providers in the current claim provider's specialty and geography. These individual and summary evaluations are normalized value transformations of each variable related to the individual claim or batch of claims. All of the score variables created in the Standardization Module 212 are then sent to the Transformation Module 213. The purpose of the Transformation Module 213 is to transform the raw, normalized value of each variable in the fraud and abuse detection predictive score model into an estimate of the probability that this value indicates likely fraud or abuse. While any supervised or unsupervised modeling approach will work within this agnostic scoring process, it is recommended that unsupervised non-parametric methodology be used to create the individual input variables and scores, due to the weaknesses of most parametric methods and traditional non-parametric methods. See utility patent application Ser. No. 13/074,576 (Rudolph, et al.), filed Mar. 29, 2011, or utility patent application Ser. No. 13/617,085 (Jost, et al.), filed Sep. 14, 2012, for an in-depth discussion of the modeling weaknesses of parametric modeling techniques and traditional non-parametric approaches.
In order to create Patient Level variables for the predictive score model, one copy of each summarized batch of claims per Patient is sent from the Historical Summary Patient Descriptive Statistics file in Module 116 to the Standardization Module 212. The Standardization Module 212 is a claim aggregation and processing calculation. Aggregation dimensions for the Patient may resemble the following design, but others may include claims-level, day-of-week and geography:
Current, patient claim summary normalized variables are created by matching the corresponding variable's information from the Historical Patient Summary Descriptive Statistics file in Module 116 variable parameters to the current claim summary behavior pattern, in order to calculate the current individual patient batch of claims' historical behavior pattern as compared to a peer group of providers' patients in the current claim provider's specialty and geography. These individual and summary evaluations are normalized value transformations of each variable related to the individual claim or batch of claims. All of the score variables created in the Standardization Module 212 are then sent to the Transformation Module 213. The purpose of the Transformation Module 213 is to transform the raw, normalized value of each variable in the fraud and abuse detection predictive score model into an estimate of the probability that this value indicates likely fraud or abuse. While any supervised or unsupervised modeling approach will work within this agnostic scoring process, it is recommended that unsupervised non-parametric methodology be used to create the individual input variables and scores, due to the weaknesses of most parametric methods and traditional non-parametric methods. See utility patent application Ser. No. 13/074,576 (Rudolph, et al.), filed Mar. 29, 2011, or utility patent application Ser. No. 13/617,085 (Jost, et al.), filed Sep. 14, 2012, for an in-depth discussion of the modeling weaknesses of parametric modeling techniques and traditional non-parametric approaches.
Each individual fraud and abuse scoring model value and the individual values corresponding to each predictor variable are then sent from Module 213 to the Score Reason Generator Module 214 to calculate score reasons explaining why an observation scored as it did. The Score Reason Generator Module 214 is used to explain the most important variables that caused the score to be highest for an individual observation. It selects the variable with the highest predictor value and lists that variable as the number 1 reason why the observation scored high. It then selects the variable with the next highest predictor value and lists that variable as the number 2 reason why the observation scored high, and so on.
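The ranking logic of the Score Reason Generator Module 214 can be sketched as a simple sort over per-variable predictor contributions. The variable names and contribution values below are illustrative assumptions:

```python
# Sketch of a score-reason generator: rank predictor contributions and
# report the top-k variables as numbered reasons. Names are illustrative.
def score_reasons(contributions: dict, top_k: int = 3) -> list:
    """Return the top_k variable names, ordered by descending
    contribution to the score, as the numbered score reasons."""
    ranked = sorted(contributions.items(), key=lambda kv: kv[1], reverse=True)
    return [name for name, _ in ranked[:top_k]]

contribs = {"billed_amount": 0.42, "units_of_service": 0.31,
            "distinct_patients_per_day": 0.55, "weekend_billing": 0.07}
print(score_reasons(contribs))
# ['distinct_patients_per_day', 'billed_amount', 'units_of_service']
```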
One copy of the scored observations is sent from the Score Reason Generator Module 214 to the Score Performance Evaluation Module 215. In the Score Performance Module, the scored distributions and individual observations are examined to verify that the model performs as expected. Observations are ranked by score, and individual claims are examined to ensure that the reasons for scoring match the information on the claim, provider or patient. The Score Performance Evaluation Module details how to improve the performance of the fraud detection predictive score model given future experience with scored transactions and actual performance on those transactions with regard to fraud and not fraud. The data is then sent from the Score Performance Evaluation Module 215 to be stored in the Future Score Development Module 216. This module stores the data and the actual claim outcomes, whether it turned out to be a fraud or not a fraud. This information will be used in the future to build future fraud and abuse predictive models to enhance prevention and detection capabilities.
Another copy of the claim is sent from the Score Reason Generator Module 214 to the Data Security Module 217 for encryption. From the Data Security Module 217 the data is sent to the Application Programming Interface Module 218 to be formatted. From the Application Programming Interface Module 218 the data is sent to the Decision Management Module 219. Decision Management Module 219 provides Login Security and Risk Management, which includes Strategy Management, Experimental Design Test and Control, Queue, Contact and Treatment Management Optimization for efficiently interacting with constituents (providers and patients/beneficiaries). It also provides an experimental design capability to test different treatments or actions randomly on populations within the healthcare value chain to assess the difference between fraud detection models, treatments or actions, as well as the ability to measure return on investment. The claims are organized in tables and displayed for review by fraud analysts on the Forensic Graphical User Interface (GUI) in Module 220. Using the GUI, the claim payer fraud analysts determine the appropriate actions to be taken to resolve the potentially fraudulent or abusive request for payment. After the final action, when the claim is determined to be fraudulent or not fraudulent, a copy of the claim is sent to the Feedback Loop Module 221. The Feedback Loop Module 221 provides the actual outcome information on the final disposition of the claim, provider or patient as fraud or not fraud, back to the original raw data record. The actual outcome either reinforces the original fraud score probability estimate that the claim was fraud or not fraud, or it countermands the original estimate and proves it to have been wrong. In either case, this information is used for future fraud and abuse predictive score model development to enhance future performance of the Automated Healthcare Risk Management System.
From the Feedback Loop Module 221 the data is stored in the Future Predictive Score Model Development Module 216 for use in future predictive score model developments. These may use supervised model development procedures if there is a known outcome for the dependent variable and an appropriate unbiased sample size exists. Otherwise, part or all of the fraud detection models may be developed utilizing an unsupervised model development method.
“Risk adjustment is the process of adjusting payments to organizations, health insurance plans for example, based on differences in their risk characteristics (and subsequent health care costs) of people enrolled in each plan.”xxvi Current risk adjustment methodology relies on demographic, health history, and other factors to adjust payments to plans.xxvii These factors are identified in a base year, and used to adjust payments to plans in the following year. For example, CMS (Centers for Medicare and Medicaid Services), estimates payments based on a prospective payment system, estimating next year's health care expenditures as a function of beneficiary demographic, health, and other factors identifiable in the current year.xxviii
For this invention, the Risk Adjusted Provider Cost Index is derived from risk adjusted groupers using patient diagnosis-based co-morbidity. The Risk Adjusted Provider Cost Index is a score to target and take systematic action on provider waste, over-servicing or over-utilization in concert with the Automated Healthcare Risk Management System's Strategy Manager and Managed Learning Environment. Waste, over-servicing or over-utilization is defined as the administration of a different level of services than the industry-accepted norm for a given condition, resulting in greater healthcare spending than had the industry norm been applied.
The risk adjustment process is well known in the healthcare industry, and this invention is designed to utilize both internal proprietary and industry/commercial risk groupers, with patient gender, patient age, primary care specialty groups, geography, healthcare segment and fraud and abuse predictive model scores. CMS, for example, created risk adjusters called Hierarchical Condition Categories (HCCs) to more accurately pay Medicare Advantage plans.xxix
The Provider Cost Index is created by calculating member month spend (expenditures) of a selected primary care provider, as compared to their cohort group. Member month spend, sometimes referred to as PMPM, is calculated by deriving the average of total healthcare costs for a single member (patient or beneficiary) in a month. PMPM is an indicator of healthcare expenditures that is analyzed by insurance companies to compare costs or premiums across different populations. A primary care physician is defined as the doctor who sees a patient first and provides basic treatment or decides that the patient should see another doctor. An example of a primary care physician specialty is Family Practice. The Provider Cost Index is calculated by dividing primary care member month spend by risk adjusted primary care member month spend. Primary care specialists with indexes greater than 1.0 have a higher spend than their cohorts for patients with the same co-morbidity or health status. As described earlier, the Provider Cost Index is used within the Automated Healthcare Risk Management System to target providers who exhibit waste, over-servicing or over-utilization. A high cost provider will be systematically educated to lower their cost, through letters, emails or phone calls. A high cost provider can also be eliminated from a payer's (insurance company's) network in order to reduce the cost of the overall network. Spend can be defined in two scenarios: 1) identifying patient costs relating directly to an individual primary care physician's services or 2) calculating total cost for each patient, including other physicians, specialists, hospital and pharmacy spend, for example. The same methodology is also transferable for scoring and identifying high cost specialists and healthcare facilities.
Referring now to
Following are simple examples on how a Risk Adjusted Provider Cost Index could be scored. For this example, we will use Diabetes/NDDM (Score 33) to calculate the index score. The calculation will take the form of a spreadsheet, but more sophisticated methods can utilize predictive modeling techniques.
The first step is segregating spend and member months for a single provider and his cohort group. The cohort does not include the individual provider in their aggregate sums or calculations, in order to not skew results towards an over-performing or under-performing provider. In the example below, we have calculated a PMPM (monthly spend) for an individual primary care provider. We are isolating Females ages 40-64 for this analysis. For this segment, we sum patient spend and divide by total member months (where member months are counted as 1 for each member seen in a single month). In this example, the PMPM (monthly spend) is calculated to be $147.
Now we perform the same methodology for all patients this provider has seen. In this example, the primary care physician only treats males and females in age group 40-64. The PMPM varies widely between the individual primary care provider and the cohort group. By breaking on Diabetes Mellitus/NIDDM (patient health), Age and Gender for this analysis, cost is normalized for the cohort group by predictors that may affect spend and outcomes.
Next we create the expected provider cost using normalized spend from the cohort group. The cohort group PMPM has been normalized for patient health, age and gender breaks. This estimate is then multiplied by the individual provider's member month to calculate the expected cost.
The final step is calculating the Provider Cost Index and the amount of waste, over-servicing or over-utilization. The index is calculated by dividing the individual primary care PMPM by the Cohort PMPM in the same Diabetes Mellitus/NIDDM (patient health), Age and Gender group. In this case, the PMPM is $147 for the individual primary care provider and $79 for the associated cohort group. The index is $147/$79 = 1.86 for Females, ages 40-64. The analysis shows this provider is costing significantly more than his cohorts when normalized for health status (Score 33), age and gender. The expected overage is approximately $3.4 million in waste, over-servicing or over-utilization.
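The arithmetic of the steps above can be sketched directly. Note that the member-month count used below (50,000) is an assumed figure chosen so that the overage matches the approximate $3.4 million in the example; it does not come from the source:

```python
# Worked sketch of the Provider Cost Index arithmetic in the example.
# The 50,000 member months are an assumption, not a figure from the source.
def provider_cost_index(provider_pmpm: float, cohort_pmpm: float) -> float:
    """Index > 1.0 means the provider spends more than risk-adjusted peers."""
    return provider_pmpm / cohort_pmpm

def expected_overage(provider_pmpm: float, cohort_pmpm: float,
                     member_months: float) -> float:
    """Dollar amount of waste/over-servicing relative to the cohort norm."""
    return (provider_pmpm - cohort_pmpm) * member_months

index = provider_cost_index(147.0, 79.0)
print(round(index, 2))                        # 1.86
print(expected_overage(147.0, 79.0, 50_000))  # 3400000.0
```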
For this example, we made the assumption there was only one health category Diabetes Mellitus/NIDDM (Score=33). In reality, it is not uncommon to have 70 or more different categories. The same methodology applies as outlined above, but with many more cells to identify and target for cost savings. Specifically, the individual categories will each have Provider Cost Indexes assigned to them that can be targeted individually by the Automated Healthcare Risk Management System in real-time.
There is a significant savings opportunity if this provider can be educated to reduce their costs and brought more in line with their cohorts.
Healthcare edits are predefined decision logic or tables that screen claims prior to payment for compliance errors, medically unlikely service scenarios and known claim payment scams. While edits are ineffective for optimally identifying fraud and abuse, and fail to identify new and emerging risk trends, they do have a role in thwarting healthcare overpayments.
CMS has created and published two types of edits, NCCI (National Correct Coding Initiative) and MUE (Medically Unlikely Edits), which together save billions of dollars per year. CMS implemented the National Correct Coding Initiative in 1996. This initiative was developed to promote correct coding of health care services by providers for Medicare beneficiaries and to prevent Medicare payment for improperly coded services. NCCI consists of automated edits provided to Medicare contractors to evaluate claim submissions when a provider bills more than one service for the same Medicare beneficiary on the same date of service. NCCI identifies pairs of services that under Medicare coding/payment policy a physician ordinarily should not bill for the same patient on the same day. Additionally, NCCI edits can be applied to the hospital outpatient prospective payment system (OPPS). NCCI edits can identify code pairs that CMS determined should not be billed together because one service inherently includes the other (bundled services). NCCI edits also identify code pairs that Medicare has determined, for clinical reasons, are unlikely to be performed on the same patient on the same day.xxx CMS developed Medically Unlikely Edits (MUE's) to reduce the paid claims error rate for Part B claims. An MUE for a HCPCS/CPT code is the maximum units of service that a provider would report under most circumstances for a single beneficiary on a single date of service.xxxi Both NCCI and MUE edits are available to the public domain for use.
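An MUE-style unit check can be sketched as a simple threshold lookup per billed code per date of service. The threshold values below are illustrative placeholders, not actual CMS MUE values:

```python
# Sketch of an MUE-style edit: flag claim lines whose units of service
# exceed the maximum plausible units for that code on a single date of
# service. Threshold values here are illustrative, not real CMS limits.
MUE_LIMITS = {"99213": 1,   # office visit: at most once per day (assumed)
              "J3301": 10}  # drug units: illustrative ceiling

def mue_failures(claim_lines: list) -> list:
    """Return the claim lines whose billed units exceed the edit limit."""
    return [line for line in claim_lines
            if line["units"] > MUE_LIMITS.get(line["code"], float("inf"))]

lines = [{"code": "99213", "units": 3},
         {"code": "J3301", "units": 4}]
print(mue_failures(lines))  # only the 99213 line fails the edit
```

NCCI procedure-to-procedure edits would work similarly, but over pairs of codes billed for the same beneficiary on the same date of service.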
Healthcare intermediaries, known as rules and edit organizations, have created business models marketing NCCI and MUE edits to Medicaid and Commercial Insurance companies. Some of these companies also hard-code a client's proprietary compliance or improper payment edits into their solution to identify incremental opportunities. Competition in this marketplace is based on price: organizations seeking rules and edit capabilities typically award RFPs to whoever offers the lowest price. Most rules and edit companies are now searching for methods of differentiation.
The Automated Healthcare Risk Management System has purposely incorporated predictive models and analytical technology to target the individual cost dynamics of fraud, abuse, waste, over-servicing, over-utilization and errors:
This invention has created Edit Analytics Capabilities within the Automated Healthcare Risk Management System.
The Strategy Manager Design allows:
The Strategy Manager can incorporate any score or data field into the decision strategy and take action: in this case, the Predictive Models for identifying fraud and abuse, the Provider Cost Index for identifying waste, over-servicing and over-utilization, and, finally, Edit Analytics failures. Multiple levels can be queued in real time, including claim-level, provider-level, beneficiary-level or healthcare merchant-level. Strategies can be subset by industry or segment type.
Referring back to
Note that the queue referenced in the yellow box in the text is real-time: with a single mouse click it can immediately create a queue for an investigator to work in real time.
The login, shown in
Over the next several sections, component detail of the Strategy Manager and Workflow design will be discussed in detail.
Decision Strategies “fire” in real time when predefined thresholds or events occur. A real-time action, treatment or status is initiated (in any combination) when the Decision Strategy “fires”. Decision Strategies are empirically derived and utilized to efficiently and effectively evaluate claims, providers, healthcare merchants and beneficiaries for fraud and abuse. Targeted segmentation, utilizing internal or external predictive models and internal and external attributes, combined with optimized treatments in a Managed Learning Environment, provides the ability to systematically and automatically evaluate hundreds of millions of claims in a short period of time and identify only the small number that present the potential fraud, abuse, over-servicing, over-utilization, waste or error cost dynamics associated with improper payments.
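The threshold-driven "firing" of a Decision Strategy can be sketched as a simple rule cascade. The score thresholds and action names below are illustrative assumptions, not values from the source:

```python
# Minimal sketch of a Decision Strategy "firing": when a claim's score
# crosses a predefined threshold, a real-time action/treatment is
# returned. Thresholds and action names are illustrative assumptions.
def decide(claim: dict) -> str:
    if claim["fraud_score"] >= 900:
        return "queue_for_investigator"   # high risk: hold and review
    if claim["fraud_score"] >= 700:
        return "send_education_letter"    # marginal risk: soft treatment
    return "pay"                          # low risk: straight-through pay

print(decide({"fraud_score": 910}))  # queue_for_investigator
print(decide({"fraud_score": 300}))  # pay
```

A production strategy would combine many such attributes and scores, but the firing principle is the same: predefined thresholds trigger predefined treatments.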
Strategy Inventory is a database and screen which contains a plurality of empirical strategy management information that will be organized in a table format, similar to the one below:
A treatment is optimized within an empirical Decision Strategy. Treatment examples for a provider, healthcare merchant or beneficiary include, but are not limited to, Calling, Emailing, Sending a Letter, Creating a Status for fraud or abuse or Refer to Third Party. Systematically, this provides an efficient and effective method to interact or communicate with a provider or beneficiary to educate and change potentially abusive or wasteful behavior. An action can be optimized within an empirical Decision Strategy by claims, providers, healthcare merchants or beneficiaries identified and presented to the queue—see
Treatment and Action Inventory is a database and screen that will contain a plurality of empirical strategy treatments and actions that will be organized in a table format, similar to the one below:
The Decision Strategy Creation capability is available to create new Optimized Decision Strategies:
The Risk Management design will have the following functionality:
A screen will be available to create new Treatments and Actions to utilize within the Optimized Decision Strategies:
An attribute in this context is any data element or variable, which can be utilized within any predictive model, empirical model or Decision Strategy. They can be numeric, dichotomous, categorical or continuous. They can also be an “alpha” characteristic containing any quantity and combination of numbers or letters. The Attribute Inventory Screen will be a working library that captures and documents a plurality of inputs available to create or modify Empirical Optimized Decision Strategies and Decision Strategies. A plurality of multidimensional predictive model scores and external and internal Attributes will be grouped into categories based upon their type. Attribute Categories include, but are not limited to:
The attribute inventory information may be organized in a table format similar to the one that follows and is displayed in a drop-down box for creation of decision strategies.
Functionality will exist to create new custom attributes using the attributes that exist within the Attribute Inventory. Requirements will include:
Attribute creation or refinement will use a plurality of transformations or functions, such as the following function examples:
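As a hedged sketch, such attribute transformations might include ratios, log damping, capping (winsorizing) and categorical recodes. The function names and values are illustrative assumptions:

```python
# Illustrative attribute-creation transformations of the sort referenced
# above (ratios, logs, caps, recodes). Names and values are assumptions.
import math

def ratio(a: float, b: float) -> float:
    """E.g. units of service per distinct patient; guards divide-by-zero."""
    return a / b if b else 0.0

def log_transform(x: float) -> float:
    """Damp heavy-tailed spend amounts with log(1 + x)."""
    return math.log1p(max(x, 0.0))

def cap(x: float, limit: float) -> float:
    """Winsorize extreme values at a fixed ceiling."""
    return min(x, limit)

def recode(value: str, mapping: dict, default: int = 0) -> int:
    """Map a categorical value (e.g. place of service) to a numeric code."""
    return mapping.get(value, default)

print(ratio(120, 30))                        # 4.0
print(cap(5000, 1000))                       # 1000
print(recode("ER", {"ER": 2, "Office": 1}))  # 2
```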
The Automated Healthcare Risk Management System also provides an Experimental Design capability that gives investigators the ability to test different treatments or actions randomly on populations within the healthcare value chain, to assess the difference between treatments (pay, decline or queue, for example) or actions (Send a Letter, Call, Email, Output a File, for example), as well as measure the incremental return on investment.
The Managed Learning Environment provides for segmenting populations and organizing test/control actions and treatments to measure and maximize return. In order to achieve results that maximize return on investment from capital dollars invested, performance measurement must be in place. However, this is not always the case in healthcare. Neither CMS nor members of the Senate can get an accurate gauge on how programs are performing, separately or collectively. An example of this issue was highlighted in a hearing on Jul. 12, 2011, where Senator Brown (R-MA) inquired whether $150 million in expenditures for program integrity systems had been good investments, when no outcome performance metrics had been established to measure their actual benefit.xxxii
The ability to tier investigator FTE (Full Time Equivalent) skill set, actions or treatments across different segments, score ranges or specialty groups and measure results is also key. Using a lower paid or lower skilled investigator FTE on easier cases and achieving the same results increases return on investment for the overall business. Shifting the more experienced investigator FTE to more complex cases provides a higher likelihood of success, than would have occurred with a lower skilled investigator. The only way to prove the incremental benefit from salary savings and increased investigation results is through a test and control design. For example:
The Managed Learning Environment also provides for real-time claim, procedure or provider counts within the Strategy Manager. The top of
Program Risk Management oversight is also a critical discipline to ensure claims, providers or beneficiaries correctly traverse models, strategies, actions, treatments and workflows. A very important step in this process is to identify areas of risk. Areas of risk include adverse impacts to programs or providers, and model and strategy performance. Below are requirements for the development/implementation of new segmentation strategies and scoring models that drive strategy and workflow management.
Program Management, using the Managed Learning Environment, ensures there will never be more than the appropriate percent of a segment in a test mode for a market—30% for example. Sample size is set using random digits through the “Hash” function. Referring to
A claim or provider group will be considered truly a part of the test if and only if the action taken within the test differs from the action that would have been taken through the “champion” or control strategy. In other words, only ‘swap-in’ and ‘swap-out’ claims/providers count toward the maximum, 30% in this example. The Managed Learning Environment capabilities also address small sample issues. For example, smaller strategy segments covering a smaller portion of the portfolio may require a larger percentage in test mode to maintain a valid test size. Further sample size may also be required if strategy node or segment level evaluations are needed for the strategy being tested.
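The hash-based test/control assignment described above can be sketched as follows. The use of SHA-256, the modulus bucketing and the field names are illustrative assumptions; the source specifies only that random digits via a "Hash" function set sample size:

```python
# Sketch of stable hash-based test/control assignment: a hash of the
# provider ID maps to a bucket in [0, 100); buckets below the test
# percentage go to the challenger strategy. SHA-256 and the modulus
# trick are illustrative assumptions, not the source's implementation.
import hashlib

def assignment(provider_id: str, test_pct: int = 30) -> str:
    digest = hashlib.sha256(provider_id.encode()).hexdigest()
    bucket = int(digest, 16) % 100   # stable pseudo-random bucket 0-99
    return "test" if bucket < test_pct else "control"

# The same provider always lands in the same cell, so treatments stay
# consistent across claim batches and results remain measurable.
print(assignment("PROV-00017") == assignment("PROV-00017"))  # True
```

Because the assignment is deterministic, swap-in/swap-out accounting against the champion strategy can be computed after the fact from the same provider IDs.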
A plurality of raw or derived internal and external attributes, captured or created during the pre-processing step and the scoring step, as well as all Predictive Model Scores and Reasons, the Provider Cost Index and Edit Analytics, are available for testing and use within the Managed Learning Environment. The top of
Key population reporting and cost benefit analysis supports this solution, with the ability to measure ROI on experimental design. For example:
Contact Management is a component of the Managed Learning Environment. It works within the Workflow Management process to effectively, efficiently and optimally interact with Beneficiaries and Providers. Interactions can be payment interventions (denials) or messaging sent directly to Beneficiaries, Providers, Healthcare Merchants or Facilities through email, phone, electronic message or letter. The Strategy Manager actions are set up for Provider education or Beneficiary intervention. The capabilities provide for a soft-gloved messaging approach for a marginal Fraud and Abuse score, or phone call with a harder talk-off for a high fraud and abuse score (where a low score is low risk and a high score is high risk and likely fraud or abuse). Each contact has a cost and each outcome an expected return. The objective of the Contact Management component within the Managed Learning Environment is to test and converge towards the optimal outcome and return. In addition to the internal data, external data, external scores, Predictive Models, Provider Cost Index, Edit Analytics, Contact Management also utilizes the following data for targeting:
Contact Management is not a capability that stands alone, but an ability that resides inside of the Managed Learning Environment. Contact Management without the ability to test actions and measure results is a sub-optimal capability. See
Output from the Strategy Manager and Managed Learning Environment with the Automated Healthcare Risk Management System automatically presents the highest risk and most valuable claims, providers, healthcare merchants and beneficiaries to queues within the Forensic Graphical User Interface (GUI) for an investigator to work. Investigators are not “looking” for suspects, as the case would be in a BI Tool or a Data Mining Tool—they are investigating high likelihood cases that have failed risk management criteria within the Strategy Manager.
Specialized investigators are allowed to navigate to other screens in order to research fraud and abuse that is more complex:
There are occasions where viewing historical procedure or claims information isn't enough to make a decision. Additional analysis screens are included to guide an investigator to a final conclusion:
Note that neither the Top 10 Behavior Comparison Screen nor the Provider Comparative Billing Analysis Screen is for an investigator to look for fraud, abuse, waste, over-servicing, over-utilization or errors; rather, they are for validating a decision or performing further research to appropriately classify a case. Remember that all of the suspect providers, healthcare merchants or beneficiaries under investigation originated by failing risk management criteria. These are not database mining or BI tools looking for suspects; they provide critical information to resolve a case.
Reporting is upon demand within the Automated Healthcare Risk Management System.
This application claims priority to provisional patent application 61/701,087, filed Sep. 14, 2012, the entire contents of which are hereby incorporated by reference. This application also incorporates the entire contents of each of the following patent applications: utility patent application Ser. No. 13/074,576, filed Mar. 29, 2011; provisional patent application 61/319,554, filed Mar. 31, 2010; provisional patent application 61/327,256, filed Apr. 23, 2010; utility patent application Ser. No. 13/617,085, filed Sep. 14, 2012; and provisional patent application 61/561,561, filed Nov. 18, 2011.
Number | Date | Country
---|---|---
61701087 | Sep 2012 | US