The present invention relates to the composition and use of clinical parameters for the prediction of, or risk stratification for, Systemic Inflammatory Response Syndrome (SIRS) several hours to days before SIRS symptoms are observable for a definitive diagnosis in a patient. The ability to predict the onset of SIRS, prior to the appearance of clinical symptoms, enables physicians to initiate therapy in an expeditious manner, thereby improving outcomes. This applies to patients who have non-infectious SIRS or patients whose SIRS progresses to sepsis. The present invention is also directed to a method of determining parameters, and combinations thereof, that are relevant for predicting the onset of a disease, e.g., SIRS.
A biomarker is a measurable substance in an organism whose presence is indicative of some phenomenon such as disease, infection, or environmental exposure. For example, detection of a cancer-associated protein biomarker in the blood indicates that the patient already has cancer. Pursuant to this invention, however, a combination of clinical features or parameters, such as physiologic measurements and/or clinical procedures (e.g., PO2 or Fingerstick Glucose), is used to predict how likely it is that the patient will progress to SIRS. These features are recorded as part of a patient’s health records but had not been associated with SIRS prior to this invention.
Prior published work on the application of artificial intelligence and/or biomarker approaches to sepsis was designed mainly to improve the sensitivity and specificity of sepsis diagnosis at various stages of the progressive syndrome. Thus, the studies involved were conducted in patients, mainly in intensive care units, for whom a diagnosis of sepsis had already been made based on widely accepted clinical criteria. In contrast, the invention predicts the onset of SIRS prior to the appearance of clinical symptoms, which the invention has accomplished in intensive care patients with a sensitivity of 85-95%, an accuracy of 80-85%, and an area under the curve (AUC) of 0.70-0.85. These terms are standard in the machine learning literature and their meanings are well known to one of ordinary skill in the art. The present invention advantageously uses algorithms to analyze the types of clinical and laboratory data that are normally collected in hospital patients to make its predictions, without requiring blood sampling and analysis for specific biomarkers.
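For illustration only, the following Python sketch shows how these standard metrics (sensitivity, accuracy, and AUC) are conventionally computed. The labels, predicted scores, and 0.5 decision threshold are hypothetical values chosen for this example; they are not data or results of the invention.

    # Illustrative only: conventional computation of sensitivity, accuracy and AUC.
    # The labels, scores and 0.5 threshold below are hypothetical assumptions.
    import numpy as np
    from sklearn.metrics import roc_auc_score

    y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])                    # 1 = developed SIRS
    y_score = np.array([0.9, 0.2, 0.8, 0.4, 0.6, 0.1, 0.7, 0.3])   # predicted probabilities
    y_pred = (y_score >= 0.5).astype(int)                          # thresholded predictions

    tp = int(np.sum((y_pred == 1) & (y_true == 1)))
    tn = int(np.sum((y_pred == 0) & (y_true == 0)))
    fp = int(np.sum((y_pred == 1) & (y_true == 0)))
    fn = int(np.sum((y_pred == 0) & (y_true == 1)))

    sensitivity = tp / (tp + fn)            # true positive rate
    accuracy = (tp + tn) / len(y_true)      # fraction of correct predictions
    auc = roc_auc_score(y_true, y_score)    # area under the ROC curve
    print(sensitivity, accuracy, auc)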
SIRS, Systemic Inflammatory Response Syndrome, is a whole-body inflammatory state. A mild systemic inflammatory response to any bodily insult may normally have some salutary effects. However, a marked or prolonged response, such as that associated with severe infections, is often deleterious and can result in widespread organ dysfunction. Many infectious agents are capable of inducing SIRS. These organisms either elaborate toxins or stimulate release of substances that trigger this response. Commonly recognized initiators are the lipopolysaccharides (LPSs, sometimes referred to as endotoxin) released by gram-negative bacteria. The resulting response involves a complex interaction between macrophages/monocytes, neutrophils, lymphocytes, platelets, and endothelial cells that can affect nearly every organ. Infectious SIRS can occur as a result of the following pathologic conditions: bacterial sepsis; burn and wound infections; candidiasis; cellulitis; cholecystitis; pneumonia; diabetic foot infection; infective endocarditis; influenza; intra-abdominal infections (e.g., diverticulitis, appendicitis); meningitis; colitis; pyelonephritis; septic arthritis; toxic shock syndrome; and urinary tract infections.
While SIRS can lead to sepsis, SIRS is not exclusively related to infection. Its etiology is broad and includes noninfectious conditions, surgical procedures, trauma, medications, and therapies. Some examples of conditions associated with non-infectious SIRS include: acute mesenteric ischemia; adrenal insufficiency; autoimmune disorders; burns; chemical aspiration; cirrhosis; dehydration; drug reaction; electrical injuries; hemorrhagic shock; hematologic malignancy; intestinal perforation; medication side effect; myocardial infarction; pancreatitis; seizure; substance abuse; surgical procedures; transfusion reactions; upper gastrointestinal bleeding; and vasculitis.
SIRS has been clinically defined as the simultaneous presence of two or more of the following features in adults: body temperature >38° C. (100.4° F.) or <36° C. (96.8° F.); heart rate of >90 beats per minute; respiratory rate of >20 breaths per minute or arterial carbon dioxide tension (PaCO2) of <32 mm Hg; and abnormal white blood cell count (>12,000/µL or < 4,000/µL or >10% immature [band] forms).
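A minimal Python sketch of the adult SIRS criteria stated above is given below; the function name, argument names, units, and the handling of a missing PaCO2 value are assumptions made solely for illustration.

    # Minimal sketch of the adult SIRS criteria stated above. The function and
    # argument names, and the handling of a missing PaCO2 value, are assumptions.
    def meets_sirs_criteria(temp_c, heart_rate, resp_rate, paco2_mmhg, wbc_per_ul, band_pct):
        criteria = [
            temp_c > 38.0 or temp_c < 36.0,                              # body temperature
            heart_rate > 90,                                             # beats per minute
            resp_rate > 20 or (paco2_mmhg is not None and paco2_mmhg < 32),
            wbc_per_ul > 12000 or wbc_per_ul < 4000 or band_pct > 10,    # white blood cells
        ]
        # SIRS requires the simultaneous presence of two or more of the features.
        return sum(criteria) >= 2

    # Example: a febrile, tachycardic patient with normal WBC meets two of four criteria.
    print(meets_sirs_criteria(38.6, 104, 18, 40, 9000, 2))  # True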
The complex pathophysiology of SIRS is independent of etiologic factors, with minor differences with respect to the cascades that it incites. This pathophysiology is briefly outlined as follows. Inflammation, the body’s response to nonspecific insults arising from chemical, traumatic, or infectious stimuli, is a critically important component. Inflammation itself is a process involving humoral and cellular responses, complement, and cytokine cascades. The relationship between these complex interactions and SIRS has been defined as a three-stage process. See Bone et al. (1992) (all citations refer to references listed at the end of the document).
In stage 1, following an insult, cytokines are produced at the site of the insult. Local cytokine production incites an inflammatory response, thereby promoting wound repair and recruitment of the reticular endothelial (fixed macrophage) system. This process is essential for normal host defense homeostasis, and its malfunction is life-threatening. Local inflammation, such as in the skin and subcutaneous soft tissues, carries the classic description of rubor (redness), tumor (swelling), dolor (pain), calor (increased heat) and functio laesa (loss of function). Importantly, on a local level, this cytokine and chemokine release may cause local tissue destruction or cellular injury by attracting activated leukocytes to the region.
In stage 2, small quantities of local cytokines are released into the circulation, enhancing the local response. This leads to growth factor stimulation and the recruitment of macrophages and platelets. This acute phase response is typically well-controlled by a decrease in pro-inflammatory mediators and by the release of endogenous antagonists.
In stage 3, a significant systemic reaction occurs if the inflammatory stimuli continue to spread into the systemic circulation. The cytokine release leads to destruction rather than protection. A consequence of this is the activation of numerous humoral cascades, generalized activation of the reticular endothelial system, and subsequent loss of circulatory integrity. This leads to end-organ dysfunction.
When SIRS is mediated by an infectious insult, the inflammatory cascade is often initiated by endotoxin. Tissue macrophages, monocytes, mast cells, platelets, and endothelial cells are able to produce a multitude of cytokines. The cytokines tumor necrosis factor-alpha (TNF-α) and interleukin-1 (IL-1) are released first and initiate several downstream cascades.
The release of IL-1 and TNF-α (or the presence of endotoxin) leads to cleavage of the nuclear factor NF-kappa B (NF-κB) inhibitor. Once the inhibitor is removed, NF-κB initiates expression of mRNAs encoding genes regulating production of other pro-inflammatory cytokines, primarily IL-6, IL-8, and interferon gamma. TNF-α and IL-1 have been shown to be released in large quantities within 1 hour of an insult and have both local and systemic effects. TNF-α and IL-1 are responsible for fever and the release of stress hormones (norepinephrine, vasopressin, activation of the renin-angiotensin-aldosterone system).
Other cytokines, especially IL-6, stimulate the release of acute-phase reactants such as C-reactive protein (CRP) and procalcitonin. Notably, infection has been shown to induce a greater release of TNF-α, thus inducing a greater release of IL-6 and IL-8 than trauma does. This is suggested to be the reason higher fever is associated with infection rather than trauma.
The pro-inflammatory interleukins either function directly on tissue or via secondary mediators to activate the coagulation cascade and the complement cascade as well as the release of nitric oxide, platelet-activating factor, prostaglandins, and leukotrienes. HMGB1 (high mobility group box 1) is a protein present in the cytoplasm and nuclei in a majority of cell types. It acts as a potent pro-inflammatory cytokine and is involved in delayed endotoxin lethality and sepsis. In response to infection or injury, as is seen with SIRS, HMGB1 is secreted by innate immune cells and/or released passively by damaged cells. Thus, elevated serum and tissue levels of HMGB1 are induced by many of the agents that cause SIRS.
The interplay between inflammation and coagulation is critical to the progression of SIRS. IL-1 and TNF-α directly affect endothelial surfaces, leading to the expression of tissue factor. Tissue factor initiates the production of thrombin, thereby promoting coagulation, and is a pro-inflammatory mediator itself. Fibrinolysis is impaired by IL-1 and TNF-α via production of plasminogen activator inhibitor-1. Pro-inflammatory cytokines also disrupt the naturally occurring anti-inflammatory mediators antithrombin and activated protein-C (APC). If unchecked, this coagulation cascade leads to complications resulting from microvascular thrombosis, including organ dysfunction. The complement system also plays a role in the coagulation cascade. Infection-related pro-coagulant activity is generally more severe than that produced by trauma.
The cumulative effect of this inflammatory cascade is an unbalanced state, with inflammation and coagulation dominating. To counteract the acute inflammatory response, the body is equipped to reverse this process via the counter-inflammatory response syndrome (CARS). IL-4 and IL-10 are cytokines responsible for decreasing the production of TNF-α, IL-1, IL-6, and IL-8. The acute phase response also produces antagonists to TNF-α and IL-1 receptors. These antagonists either bind the cytokine, and thereby inactivate it, or block the receptors. The balance of SIRS and CARS helps to determine a patient’s outcome after an insult.
The normal physiology of an inflammatory response consists of an acute pro-inflammatory state resulting from innate immune system recognition of ligands, and an anti-inflammatory phase that can serve to modulate the pro-inflammatory phase. Under normal circumstances, these coordinated responses direct a return to homeostasis. Severe or protracted SIRS can result in septic shock. Bacteremia is usually present but may be absent. Increased nitric oxide levels may be responsible for vasodilation, and hypotension is also due to decreased circulating intravascular volume resulting from diffuse capillary leaks. Activation of platelets and the coagulation cascade can lead to the formation of fibrin-platelet aggregates, which further compromise tissue blood flow. The release of vasoactive substances, formation of microthrombi in the pulmonary circulation, or both together increase pulmonary vascular resistance, whereas systemic venodilation and transudation of fluid into tissues result in relative hypovolemia.
The true incidence of SIRS is unknown but probably much higher than documented, owing to the nonspecific nature of its definition. Not all patients with SIRS require hospitalization or have diseases that progress to serious illness. Because SIRS criteria are nonspecific and occur in patients who present with conditions ranging from influenza to cardiovascular collapse associated with severe pancreatitis, it is useful to stratify any incidence figures based on SIRS severity.
Results of epidemiologic studies conducted in the US have been published. A prospective survey of patients admitted to a tertiary care center revealed that 68% of hospital admissions to surveyed units met SIRS criteria. See Rangel-Fausto et al. (1995). The incidence of SIRS increased as the level of unit acuity increased. The following progression of patients with SIRS was noted: 26% developed sepsis, 18% developed severe sepsis, and 4% developed septic shock within 28 days of admission.
A hospital survey of SIRS revealed an overall in-hospital incidence of 542 episodes per 1000 hospital days. See Pittet et al. (1995). In comparison, the incidence in the intensive care unit (ICU) was 840 episodes per 1000 hospital days. Another study demonstrated that 62% of patients who presented to the emergency department with SIRS had a confirmed infection, while 38% did not. See Comstedt et al. (2009). Still, the incidence of severe SIRS associated with infection was found to be 3 cases per 1,000 population, or 2.26 cases per 100 hospital discharges. See Angus et al. (2001). The real incidence of SIRS, therefore, must be much higher and depends significantly on the rigor with which the definition is applied.
Prognosis depends on the etiologic source of SIRS, as well as on associated comorbidities. A study of SIRS in acutely hospitalized medical patients demonstrated a 6.9 times higher 28-day mortality in SIRS patients than in non-SIRS patients. Most deaths occurred in SIRS patients with an associated malignancy. See Comstedt et al. (2009). Mortality rates in the study of tertiary care patients mentioned above, see Rangel-Fausto et al. (1995), were 7% (SIRS), 16% (sepsis), 20% (severe sepsis), and 46% (septic shock). The median time interval from SIRS to sepsis was inversely related to the number of SIRS criteria met. Morbidity was related to the causes of SIRS, complications of organ failure, and the potential for prolonged hospitalization. A study evaluating mortality in patients with suspected infection in the emergency department showed the following in-hospital mortality rates: Suspected infection without SIRS, 2.1%; Sepsis, 1.3%; Severe Sepsis, 9.2%; and Septic Shock, 28%. See Shapiro et al. (2006).
Evaluation of the SIRS criteria in patients who underwent transcatheter aortic valve implantation (TAVI) revealed that SIRS appeared to be a strong predictor of mortality. See Sinning et al. (2012). The occurrence of SIRS was characterized by a significantly elevated release of IL-6 and IL-8, with subsequent increase in the leukocyte count, C-reactive protein (CRP), and pro-calcitonin. The occurrence of SIRS was related to 30-day and 1-year mortality (18% vs 1.1% and 52.5% vs 9.9%, respectively) and independently predicted 1-year mortality risk.
The early identification and administration of supportive care is key in the management of patients with SIRS who could progress to Sepsis, Severe Sepsis or Septic Shock. Several studies have shown that fluids and antibiotics, when administered early in the disease process, can prevent hypoxemia and hypotension. See Annane et al. (2005); Dellinger et al. (2008); Hollenberg et al. (2004); and Dellinger et al. (2013). Indeed, international guidelines on the management of sepsis recommend the initiation of resuscitative measures within 6 hours of the recognition of septic symptoms. See Dellinger et al. (2013).
The ability to predict the onset of SIRS prior to the appearance of clinical symptoms would enable physicians to initiate therapy in an expeditious manner, thereby improving outcomes. This applies to patients who have non-infectious SIRS or patients whose SIRS progresses to Sepsis.
SIRS is associated with a variety of inflammatory states, including sepsis, pancreatitis, burns, surgery, etc. When confronted with SIRS, physicians typically attempt to identify potential etiologies and interventions that can prevent adverse outcomes. For example, sepsis is a frequently encountered problem in intensive care unit (ICU) patients who have been instrumented with invasive catheters. Since SIRS precedes sepsis, and the development of sepsis is associated with significant morbidity and mortality, the presence of SIRS in the ICU cannot be ignored. SIRS in these patients often prompts a search for a focus of infection and potentially the administration of empiric antibiotics. Since minimizing the time to antibiotic administration is one intervention that has consistently been shown to improve outcomes in these patients, SIRS often serves as an alarm that causes health care workers to consider the use of antimicrobials in selected patients.
However, using the invention to predict the onset of SIRS 6 to 48 hours earlier (e.g., 6, 12, 24 or 48 hours earlier) would allow antibiotics to be administered earlier, which is advantageous either because patients would not become as sick before they improve, or because there would be time to try one more antibiotic if the first one or two (or more) do not work. In patients with bacteremia, SIRS often portends the development of sepsis, severe sepsis and/or septic shock. It is important to recognize that in these patients SIRS is diagnosed after the patient has already been infected. Methods that identify patients who will eventually develop SIRS are desirable because they detect patients who are at an earlier stage in the infectious process. The key benefit of early and accurate SIRS prediction is the ability to identify patients who are at risk of infection before the infection has started to manifest itself. Since a great deal of data suggests that the earlier supportive therapy (e.g., fluids and antibiotics) is administered, the better the outcomes, a SIRS prediction prior to the onset of symptoms could significantly impact clinical management and outcomes. More precisely, the accurate prediction of SIRS 6 to 48 hours (e.g., 6, 12, 24 or 48 hours) prior to the onset of symptoms would provide enough time to mobilize hospital resources, creating the best environment for the patient. For example:
Patients on inpatient floors who are identified as being at high risk of SIRS could be transferred to high acuity units that have a higher nurse-to-patient ratio, thereby helping to ensure that such patients are monitored in a manner that is commensurate with their risk;
A positive SIRS prediction in patients who are instrumented with invasive catheters (which on its own may increase one’s risk of bacteremia) would warrant closer monitoring for septic signs, and potentially a search for a septic focus. The threshold for the administration of fluids and empiric antibiotics in these patients would be significantly lower than in patients who have not been identified as high risk; and
Patients who are identified as being at high risk for SIRS would benefit from a careful review of their medication history to ensure that they are not on agents that may be associated with a drug reaction (a cause of non-infectious SIRS). Careful review of medications in such patients provides one mechanism to circumvent adverse medication side effects.
The role of biomarkers in the diagnosis of sepsis and patient management has been evaluated. See Bernstein (2008). SIRS is an acute response to trauma, burn, or infectious injury characterized by fever, hemodynamic and respiratory changes, and metabolic changes, not all of which are consistently present. The SIRS reaction involves hormonally driven changes in liver glycogen reserves, triggering of lipolysis, lean body proteolysis, and reprioritization of hepatic protein synthesis, with up-regulation of synthesis of acute phase proteins and down-regulation of albumin and important circulating transport proteins. Understanding of these processes has led to the identification of biomarkers for sepsis and for severe, moderate or early SIRS, which also can hasten treatment and recovery. If unabated, the SIRS reaction leads to a recurring cycle, with hemodynamic collapse from septic shock, indistinguishable from cardiogenic shock, and death.
By focusing on early and accurate diagnosis of infection in patients suspected of SIRS, antibiotic overuse and its associated morbidity and mortality may be avoided, and therapeutic targets may be identified. The performance of diagnostic algorithms and biomarkers for sepsis in patients presenting with leukocytosis and other findings has been evaluated. Suspected patients are usually identified by a WBC count above 12,000/µL, procalcitonin level, SIRS and other criteria, such as serum biomarkers of sepsis. In a study of 435 patients, see Gultepe et al. (2014), procalcitonin alone was a superior marker for sepsis. In patients with sepsis there was a marked increase in procalcitonin (p = 0.0001), and procalcitonin was also increased in patients requiring ICU admission, heart rate and blood pressure monitoring, and assisted ventilation (p = 0.0001).
The emergence of large-scale data integration in electronic health records (EHR) presents unprecedented opportunities for the design of methods to construct knowledge from heterogeneous datasets and, by extension, to inform clinical decisions. However, the current ability to efficiently extract informed decision support is limited by the complexity of the clinical states and decision process, missing data, and a lack of analytical tools to advise based on statistical relationships. A machine learning basis for a clinical decision support system to identify patients at high risk for hyperlactatemia, based upon routinely measured vital signs and laboratory studies, has been developed. See Gultepe et al. (2014).
Electronic health records of 741 adult patients who met at least two systemic inflammatory response syndrome (SIRS) criteria were used to associate patients’ vital signs and white blood cell count (WBC) with sepsis occurrence and mortality. Generative and discriminative classification methods (naive Bayes, support vector machines, Gaussian mixture models, hidden Markov models) were used to integrate heterogeneous patient data and form a predictive tool for the inference of lactate level and mortality risk.
An accuracy of 0.99 and a discriminability of 1.00 area under the receiver operating characteristic curve (AUC) for lactate level prediction were obtained when the vital signs and WBC measurements were analyzed in a 24 h time bin. An accuracy of 0.73 and a discriminability of 0.73 AUC for mortality prediction in patients with sepsis were achieved with three properties: the median of lactate levels, the mean arterial pressure, and the median absolute deviation of the respiratory rate. These findings introduce a new scheme for the prediction of lactate levels and mortality risk from patient vital signs and WBC. Accurate prediction of both of these variables can drive the appropriate response by clinical staff. See Gultepe et al. (2014).
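The following Python sketch illustrates, in general terms, the type of generative classification described by Gultepe et al. (2014). The synthetic data, feature choices, and Gaussian naive Bayes estimator are assumptions made for illustration and do not reproduce the published model or its data.

    # Illustrative only: a Gaussian naive Bayes classifier of the general type
    # described by Gultepe et al. (2014). The synthetic 24 h bin features and
    # target are hypothetical; this is not the published model or its data.
    import numpy as np
    from sklearn.naive_bayes import GaussianNB

    rng = np.random.default_rng(0)
    # Hypothetical 24 h bin summaries: [heart rate, respiratory rate, MAP, WBC]
    X = rng.normal(loc=[95, 22, 70, 13000], scale=[15, 5, 12, 4000], size=(200, 4))
    # Hypothetical binary target: 1 if the binned lactate level was elevated.
    y = (X[:, 0] + 2 * X[:, 1] - X[:, 2] + rng.normal(0, 10, 200) > 60).astype(int)

    model = GaussianNB().fit(X[:150], y[:150])      # train on the first 150 bins
    print("held-out accuracy:", model.score(X[150:], y[150:]))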
Sepsis is one of the oldest syndromes in medicine. It is the leading cause of death in non-coronary ICUs in the US, with associated mortality rates upwards of 80%. See Shapiro et al. (2006); Sinning et al. (2012); and Nierhaus et al. (2013). The term Sepsis refers to a clinical spectrum of complications, often starting with an initial infection. Untreated, the disease cascade progresses through stages with increasing mortality, from SIRS to Sepsis to Severe Sepsis to Septic Shock, and ultimately death. See Shapiro et al. (2006); Sinning et al. (2012); Nierhaus et al. (2013); and Lai et al. (2010). A representative course is illustrated in a prospective study that found 36% mortality in ICU patients with Sepsis, 52% in patients with Severe Sepsis and 82% in patients with Septic Shock. See Jekarl et al. (2013). While early goal-directed therapy has been shown to provide substantial benefits in patient outcomes, efficacy is contingent upon early detection or suspicion of the underlying septic etiology.
In 1992, an international consensus panel defined sepsis as a systemic inflammatory response to infection, noting that sepsis could arise in response to multiple infectious causes. The panel proposed the term “severe sepsis” to describe instances in which sepsis is complicated by acute organ dysfunction, and it codified “septic shock” as sepsis complicated by either hypotension refractory to fluid resuscitation or hyperlactatemia. In 2003, a second consensus panel endorsed most of these concepts, with the caveat that signs of SIRS, such as tachycardia or an elevated white-cell count, occur in many infectious and noninfectious conditions and therefore are not helpful in distinguishing sepsis from other conditions. Thus, “severe sepsis” and “sepsis” are sometimes used interchangeably to describe the syndrome of infection complicated by acute organ dysfunction. See Angus et al. (2013).
These definitions have achieved widespread usage and become the gold standard in sepsis protocols and research. Yet sepsis clearly comprises a complex, dynamic, and relational distortion of human life. Given the profound scope of the loss of life worldwide, a need has been expressed to disengage from the simple concepts of the past and develop new approaches which engage sepsis in its true form, as a complex, dynamic, and relational pattern of death. See Lawrence A. Lynn (2014).
Several molecular markers have been discussed to facilitate diagnosis and treatment monitoring of sepsis in humans and several animal species. The most widely used ones may be CRP (C-reactive protein) and PCT (procalcitonin). Various interleukins have also been discussed as potential biomarkers of sepsis. However, they are of limited use at present because of a lack of specificity. For example, Carrigan et al. (2004) reported that in humans, in whom septic disease patterns have been extensively investigated, the sensitivity and specificity of current markers can (even as mean values) be as low as 33% and 66%, respectively. Published data also have a high degree of inhomogeneity. Thus, there is a definite need for new diagnostic markers with improved diagnostic characteristics for the diagnosis of sepsis, especially early diagnosis. In systemic inflammation, e.g., in multiply traumatized patients, such a diagnosis is often very difficult because of other pathological processes interfering with the “normal” physiological values and parameters measured in standard intensive care medicine. Diagnosis of sepsis in patients with systemic inflammation, e.g., complications in polytraumatized patients, is a specific problem for which a high need exists in intensive care medicine.
Biomarkers for sepsis and resulting mortality can be detected by assaying blood samples. Changes in the concentration of the biomarkers can be used to indicate sepsis, risk of sepsis, progression of sepsis, remission from sepsis, and risk of mortality. Changes can be evaluated relative to datasets, natural or synthetic or semisynthetic control samples, or patient samples collected at different time points. Some biomarkers’ concentrations are elevated during disease and some are depressed. These are termed informative biomarkers. Some biomarkers are diagnostic in combination with others. Individual biomarkers may be weighted when used in combinations. Biomarkers can be assessed individually, isolated or in assays, in parallel assays, or in single-pot assays. See the ‘982 patent.
The early prediction or diagnosis of sepsis allows for clinical intervention before the disease rapidly progresses beyond initial stages to the more severe stages, such as severe sepsis or septic shock, which are associated with high mortality. Prediction or diagnosis has been accomplished, see the ‘573 patent, using a molecular diagnostics approach, involving comparing an individual’s profile of biomarker expression to profiles obtained from one or more control, or reference, populations, which may include a population who develops sepsis. Recognition of features in the individual’s biomarker profile that are characteristic of the onset of sepsis allows a clinician to diagnose the onset of sepsis from a bodily fluid isolated from the individual at a single point in time. The necessity of monitoring the patient over a period of time may be avoided, allowing clinical intervention before the onset of serious symptoms. Further, because the biomarker expression is assayed for its profile, identification of the particular biomarkers is unnecessary. The comparison of an individual’s biomarker profile to biomarker profiles of appropriate reference populations likewise can be used to diagnose SIRS in the individual. See the ‘573 patent.
Additional biomarkers for the diagnosis of sepsis include detection of inducible nitric oxide (NO) synthase (the enzyme responsible for overproduction of NO in inflammation), detection of endotoxin neutralization, and patterns of blood proteins. A panel of blood biomarkers for assessing a sepsis condition utilizes an iNOS indicator in combination with one or more indicators of patient predisposition to becoming septic, the existence of organ damage, or the worsening of or recovery from a sepsis episode. See the ‘968 publication. Endotoxin neutralization as a biomarker for sepsis has been demonstrated, see the ‘530 publication, using methods specifically developed for detecting the neutralization in a human subject. This system has also provided methods for determining the effectiveness of a therapeutic agent for treating sepsis. See the ‘530 publication. Application of modern global proteomic approaches has been used for the identification and detection of biological fluid biomarkers of neonatal sepsis. See the ‘652 publication. Methods using expression levels of the biomarkers Triggering Receptor Expressed on Myeloid cells-1 (TREM-1) and TREM-like receptor transcript-1 (TLT1), as an indication of the condition of the patient, alone or in combination with further sepsis markers, have been used for the diagnosis, prognosis and prediction of sepsis in a subject. See the ‘370 patent. When levels of the biomarkers indicate the presence of sepsis, treatment of the patient with an antibiotic and/or fluid resuscitation treatment is indicated. See the ‘370 patent.
A multibiomarker-based outcome risk stratification model has been developed for adult septic shock. See the ‘869 publication. The approach employs methods for identifying, validating, and measuring clinically relevant, quantifiable biomarkers of diagnostic and therapeutic responses for blood, vascular, cardiac, and respiratory tract dysfunction, particularly as those responses relate to septic shock in adult patients. The model consists of identifying one or more biomarkers associated with septic shock in adult patients, obtaining a sample from an adult patient having at least one indication of septic shock, then quantifying from the sample an amount of one or more biomarkers, wherein the level of the biomarker(s) correlates with a predicted outcome. See the ‘869 publication.
The biomarker approach has also been used for prognostic purposes, by quantifying levels of metabolite(s) that predict severity of sepsis. See the ‘969 publication. The method involves measuring the age, mean arterial pressure, hematocrit, patient temperature, and the concentration of one or more metabolites that are predictive of sepsis severity. Analysis of a blood sample from a patient with sepsis establishes the concentration of the metabolite, after which the severity of sepsis infection can be determined by analyzing the measured values in a weighted logistic regression equation. See the ‘969 publication.
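For illustration, a weighted logistic regression of the general kind described in the ‘969 publication can be sketched as follows; the coefficient values, intercept, and function name are hypothetical placeholders and are not taken from that publication.

    # Illustrative sketch of a weighted logistic regression combining the listed
    # inputs. The weights and intercept below are hypothetical placeholders.
    import math

    def sepsis_severity_probability(age, mean_arterial_pressure, hematocrit,
                                    temperature_c, metabolite_conc,
                                    weights=(0.03, -0.04, -0.05, 0.4, 1.2),
                                    intercept=-12.0):
        z = intercept + sum(w * x for w, x in zip(
            weights, (age, mean_arterial_pressure, hematocrit,
                      temperature_c, metabolite_conc)))
        return 1.0 / (1.0 + math.exp(-z))   # logistic transform of the weighted sum

    # Example call with hypothetical patient values.
    print(sepsis_severity_probability(68, 65, 32, 38.9, 3.1))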
A method based on determination of blood levels of antitrypsin (ATT), or fragments thereof, and transthyretin (TTR), or fragments thereof, has been described for the diagnosis, prediction or risk stratification for mortality and/or disease outcome of a subject that has or is suspected to have sepsis. See the ‘631 publication. The presence and/or level of ATT or its fragments is correlated with an increased risk of mortality and/or poor disease outcome if the level of ATT is below a certain cut-off value and/or the level of its fragments is above a certain cut-off value. Similarly, an increased risk of mortality and/or poor disease outcome exists if the level of TTR is below a certain cut-off value and/or the level of its fragments is also below a certain cut-off value. See the ‘631 publication.
The amount of data acquired electronically from patients undergoing intensive care has grown significantly during the past decade. Before it becomes knowledge for diagnostic and/or therapeutic purposes, bedside data must be extracted and organized to become information, and then an expert can interpret this information. Artificial intelligence applications in the intensive care unit represent an important use of such technologies. See Hanson et al. (2001). The use of computers to extract information from data and enhance analysis by the human clinical expert is a largely unrealized role for artificial intelligence. However, a variety of novel, computer-based analytic techniques have been developed recently. Although some of the earliest artificial intelligence applications were medically oriented, AI has not been widely accepted in medicine. Despite this, patient demographic, clinical, and billing data are increasingly available in an electronic format and therefore susceptible to analysis by intelligent software. The intensive care environment is therefore particularly suited to the implementation of AI tools because of the wealth of available data and the inherent opportunities for increased efficiency in inpatient care. A variety of new AI tools have become available in recent years that can function as intelligent assistants to clinicians, constantly monitoring electronic data streams for important trends, or adjusting the settings of bedside devices. The integration of these tools into the intensive care unit can be expected to reduce costs and improve patient outcomes. See Hanson et al. (2001).
Extensive efforts are being devoted to adding intelligence to medical devices, with various degrees of success. See Begley et al. (2000). Numerous technologies are currently used to create expert systems. Examples include: rule-based systems; statistical probability systems, Bayesian belief networks; neural networks; data mining; intelligent agents, multiple-agent systems; genetic algorithms; and fuzzy logic. Examples of specific uses include: pregnancy and child-care health information; pattern recognition in epidemiology, radiology, cancer diagnosis and myocardial infarction; discovery of patterns in treatments and outcomes in studies on epidemiology, toxicology and diagnosis; searches for and retrieval of relevant information from the internet or other knowledge repositories; and procedures that mimic evolution and natural selection to solve a problem.
In the modern healthcare system, rapidly expanding costs and complexity, the growing myriad of treatment options, and exploding information streams that often do not effectively reach the front lines hinder the ability to choose optimal treatment decisions over time. A general purpose (non-disease-specific) computational/artificial intelligence (AI) framework to address these challenges has been developed. See Bennett et al. (2013). This framework serves two potential functions: it provides a simulation environment for exploring various healthcare policies and payment methodologies, and it provides the basis for clinical artificial intelligence. The approach combines Markov decision processes and dynamic decision networks to learn from clinical data and develop complex plans via simulation of alternative sequential decision paths, while capturing the sometimes conflicting, sometimes synergistic interactions of various components in the healthcare system. It can operate in partially observable environments (in the case of missing observations or data) by maintaining belief states about patient health status, and it functions as an online agent that plans and re-plans as actions are performed and new observations are obtained.
Bennett and Hauser evaluated the framework using real patient data from an electronic health record, optimizing “clinical utility” in terms of cost-effectiveness of treatment (utilizing both outcomes and costs) and reflecting realistic clinical decision-making. The computational approaches were compared to existing treatment-as-usual (TAU) approaches, and the results demonstrate the feasibility of the AI framework. The AI framework easily outperformed the current TAU case-rate/fee-for-service models of healthcare. Using Markov decision processes, for instance, the cost per unit of outcome change (CPUC) was $189 vs. $497 for TAU (where lower CPUC is considered optimal), while at the same time the AI approach could obtain a 30-35% increase in patient outcomes. According to Bennett and Hauser, modifying certain AI model parameters could further enhance this advantage, obtaining approximately 50% more improvement (outcome change) for roughly half the costs. Thus, given careful design and problem formulation, an AI simulation framework can approximate optimal decisions even in complex and uncertain environments.
Development and assessment of a data-driven method that infers the probability distribution of the current state of patients with sepsis, likely trajectories, optimal actions related to antibiotic administration, and predictions of mortality and length-of-stay have been conducted. See Tsoukalas et al. (2015). A data-driven, probabilistic framework for clinical decision support in sepsis-related cases was constructed by first defining states, actions, observations and rewards based on clinical practice, expert knowledge and data representations in an EHR dataset of 1492 patients. A Partially Observable Markov Decision Process (POMDP) model was used to derive the optimal policy based on individual patient trajectories, and the performance of the model-derived policies was evaluated in a separate test set. Policy decisions were focused on the type of antibiotic combinations to administer. Multi-class and discriminative classifiers were used to predict mortality and length of stay. Data-derived antibiotic administration policies led to a favorable patient outcome in 49% of the cases, versus 37% when the alternative policies were followed (P=1.3e-13).
Sensitivity analysis on the model parameters and missing data argued for a highly robust decision support tool that withstands parameter variation and data uncertainty. When the optimal policy was followed, 387 patients (25.9%) had 90% of their transitions to better states and 503 patients (33.7%) had 90% of their transitions to worse states (P=4.0e-06), while in the non-policy cases these numbers were 192 (12.9%) and 764 (51.2%) patients (P=4.6e-117), respectively. Furthermore, the percentage of transitions within a trajectory that led to a better or better/same state was significantly higher when the policy was followed than in the non-policy cases (605 vs 344 patients, P=8.6e-25). Mortality was predicted with an AUC of 0.7 and an accuracy of 0.82 in the general case, and similar performance was obtained for the inference of length-of-stay (AUC of 0.69 to 0.73, with accuracies from 0.69 to 0.82). Thus, a data-driven model was able to suggest favorable actions and to predict mortality and length of stay as described above. See Tsoukalas et al. (2015).
For sepsis monitoring and control, a computer-implemented alerting method has been developed. See the ‘449 patent. The method involves automatically extracting with a computer system, from records maintained for a patient under care in a healthcare facility, information from an electronic medical record, and obtaining with the computer system information about the real-time status of the patient. The method also involves using the information from the electronic medical record and the information about the real-time status to determine whether the patient is likely to be suffering from a dangerous level of sepsis, using information from the electronic medical record to determine whether treatment for sepsis is already being provided to the patient, and electronically alerting a caregiver over a network if it is determined that a potentially dangerous level of sepsis exists and that treatment for sepsis is not already being provided. See the ‘449 patent.
The complexity of contemporary medical practice has impelled the development of different decision-support aids based on artificial intelligence and neural networks. Distributed associative memories are neural network models that fit well with the concept of cognition emerging from current neurosciences. A context-dependent autoassociative memory model has been reported, see Pomi et al. (2006), in which sets of diseases and symptoms are mapped onto bases of orthogonal vectors. A matrix memory stores associations between the signs and symptoms, and their corresponding diseases. In an implementation of the application with real data, a memory was trained with published data of neonates with suspected late-onset sepsis in a neonatal intensive care unit. A set of personal clinical observations was used as a test set to evaluate the capacity of the model to discriminate between septic and non-septic neonates on the basis of clinical and laboratory findings.
Results showed that matrix memory models with associations modulated by context could perform automated medical diagnoses. The sequential availability of new information over time makes the system progress in a narrowing process that reduces the range of diagnostic possibilities. At each step, the system provides a probabilistic map of the different possible diagnoses to that moment. The system can incorporate clinical experience, thereby building a representative database of historical data that captures geo-demographical differences between patient populations. The trained model succeeded in diagnosing late-onset sepsis within the test set of infants in the NICU: sensitivity 100%; specificity 80%; percentage of true positives 91%; percentage of true negatives 100%; accuracy (true positives plus true negatives over the totality of patients) 93.3%; and Cohen’s kappa index 0.84.
An electronic sepsis surveillance system (ESSV) was developed to identify severe sepsis and determine its time of onset. ESSV sensitivity and specificity were evaluated during an 11-day prospective pilot study and a 30-day retrospective trial. See Brandt et al. (2015). ESSV diagnostic alerts were compared with care team diagnoses and with administrative records, using expert adjudication as the standard for comparison. ESSV was 100% sensitive for detecting severe sepsis but only 62.0% specific. During the pilot study, the software identified 477 patients, compared with 18 by adjudication. In the 30-day trial, adjudication identified 164 severe sepsis patients, whereas ESSV detected 996. ESSV was more sensitive but less specific than care team or administrative data. ESSV-identified time of severe sepsis onset was a median of 0 hours later than by adjudication (interquartile range = 0.05).
A retrospective, data-driven analysis based on neural networks and rule-based systems has been applied to the data of two clinical studies of septic shock diagnosis. See Brause et al. (2001). The approach included the steps of data mining, i.e., building up a database, cleaning and preprocessing the data, and finally choosing an adequate analysis for the patient data. Two architectures based on supervised neural networks were chosen. Patient data were classified into two classes (survived and deceased) by a diagnosis based either on the black-box approach of a growing RBF network or on a second network that could explain its diagnosis with human-understandable diagnostic rules. Advantages and drawbacks of these classification methods for an early warning system were identified.
It has been recommended that mortality risk stratification or severity-of-illness scoring systems be utilized in clinical trials and in practice to improve the precision of evaluation of new therapies for the treatment of sepsis, to monitor their utilization and to refine their indications. See Barriere et al. (1995). With the increasing influence of managed care on healthcare delivery, there will be increased demand for techniques to stratify patients for cost-effective allocation of care. Severity of illness scoring systems are widely utilized for patient stratification in the management of cancer and heart disease.
Mortality risk prediction in sepsis has evolved from identification of risk factors and simple counts of failing organs, to techniques that mathematically transform a raw score, comprised of physiologic and/or clinical data, into a predicted risk of death. Most of the developed systems are based on global ICU populations rather than upon sepsis patient databases. A few systems are derived from such databases. Mortality prediction has also been carried out from assessments of plasma concentrations of endotoxin or cytokine (IL-1, IL-6, TNF-α). While increased levels of these substances have been correlated with increased mortality, difficulties with bioassay and their sporadic appearance in the bloodstream prevent these measurements from being practically applied. The calibration of risk prediction methods comparing predicted with actual mortality across the breadth of risk for a population of patients is excellent, but overall accuracy in individual patient predictions is such that clinical judgment must remain a major part of decision-making. With databases of appropriate patient information increasing in size and complexity, clinical decision making requires the innovation of a reliable scoring system. See Angus et al. (2013).
Dynamic Bayesian Networks, a temporal probabilistic technique for modeling a system whose state changes over time, were used to detect the presence of sepsis soon after the patient visits the emergency department. See Nachimuthu et al. (2012). A model was built, trained and tested using data from 3,100 patients admitted to the emergency department, and the accuracy of detecting sepsis using data collected within the first 3 hours, 6 hours, 12 hours and 24 hours after admission was determined. The corresponding areas under the curve were 0.911, 0.915, 0.937 and 0.944, respectively.
Application of new knowledge based methods to a septic shock patient database has been proposed, and an approach has been developed that uses wrapper methods (bottom-up tree search or ant feature selection) to reduce the number of properties. See Fialho et al. (2012). The goal was to estimate, as accurately as possible, the outcome (survived or deceased) of septic shock patients. A wrapper feature selection based on soft computing methods was applied to a publicly available ICU database. Fuzzy and neural models were derived and features were selected using a tree search method and ant feature selection.
An attempt has been made to support medical decision making using machine learning for early detection of late-onset neonatal sepsis from off-the-shelf medical data and electronic medical records (EMR). See Mani et al. (2014). Data used were from 299 infants admitted to the neonatal intensive care unit and evaluated for late-onset sepsis. Gold standard diagnostic labels (sepsis negative, culture positive sepsis, culture negative/clinical sepsis) were assigned based on all the laboratory, clinical and microbiology data available in EMR. Only data that were available up to 12 h after phlebotomy for blood culture testing were used to build predictive models using machine learning (ML) algorithms. Sensitivity, specificity, positive predictive value and negative predictive value of sepsis treatment of physicians were compared with predictions of models generated by ML algorithms.
The treatment sensitivity of all nine ML algorithms, and the specificity of eight of the nine ML algorithms tested, exceeded that of the physician when culture-negative sepsis was included. When culture-negative sepsis was excluded, both sensitivity and specificity exceeded those of the physician for all the ML algorithms. The top three predictive variables were the hematocrit or packed cell volume, chorioamnionitis and respiratory rate. See Rangel-Fausto et al. (1995); and Mani et al. (2014).
The importance of preprocessing in clinical databases has been recognized. Specifically, in intensive care units, data are often irregularly recorded, contain a large number of missing values, and have uneven sampling times. A systematic preprocessing procedure has been proposed, see Marques et al. (2011), that can be generalized to common clinical databases. This procedure was applied to a known septic shock patient database and classification results were compared with previous studies. The goal was to estimate, as accurately as possible, the outcome (survived or deceased) of these septic shock patients. Neural modeling was used for classification, and results showed that preprocessing improved classifier accuracy. See Marques et al. (2011).
The present invention relates to the composition and use of clinical parameters (or features) for the prediction of, or risk stratification for, Systemic Inflammatory Response Syndrome (SIRS) several hours to days before SIRS symptoms are observable for a definitive diagnosis in a patient, and relates to the development of groups of parameters and corresponding prediction models for predicting the onset of a disease, e.g., SIRS. The ability to predict the onset of SIRS, prior to the appearance of clinical symptoms, enables physicians to initiate therapy in an expeditious manner, thereby improving outcomes. This applies to patients who have non-infectious SIRS or patients whose SIRS progresses to sepsis. The ability to predict a disease, e.g., SIRS, is useful for healthcare professionals to provide early prophylactic treatment for hospitalized patients who would otherwise develop sepsis and/or conditions, such as pancreatitis, trauma, or burns, that share symptoms identical or similar to those of, for example, SIRS.
Moreover, such a predictive ability can also be applied to enhance patient care during clinical trials. A clinical trial is a prospective biomedical or behavioral research study on human subjects that is designed to answer specific questions about biomedical or behavioral interventions (novel vaccines, drugs, treatments, devices or new ways of using known interventions), generating safety and efficacy data. The patients can include patients who develop SIRS or SIRS-like symptoms when they are enrolled in clinical trials investigating a variety of pre-existing conditions. For example, a medical device company could be conducting a trial for an implantable device such as a hip replacement system, or a pharmaceutical company could be conducting a trial for a new immunosuppressant for organ recipients. In both scenarios, the clinical trial protocol would concentrate on functional and recovery measurements. If trial investigators had access to a method that predicted which patients were infected during the operation, or at any time during the trial, they would be able to provide early treatment and minimize adverse events and patient dropout. Correspondingly, the same method can also be used to screen patients during the initial phase of patient enrollment: a potential enrollee predicted to develop SIRS could first be treated or excluded from the trial, thereby reducing adverse or confounding results during the trial.
The invention is based on combinatorial extraction and iterative prioritization of clinical parameters and measurements (or, collectively, “features”) commonly available in healthcare settings in the form of common patient measurements, laboratory tests, medications taken, fluids and solids entering and leaving the patient by specified routes, to correlate their presence and temporal fluctuations to whether a patient would ultimately develop SIRS. This group of clinical parameter combinations has not been previously associated with SIRS or related to its progression and risk stratification. The invention relates, in general, to the identification and prioritization of these clinical parameters and measurements, or combinations thereof, for the prediction (or predictive modeling) of SIRS. As shown in the below timeline, the invention enables the prediction of SIRS well prior to a prediction time (and/or a time of diagnosis) enabled by existing technologies.
This invention describes the identification of seemingly unrelated physiologic features and clinical procedures, combinations of which can be used to predict accurately the likelihood of a SIRS-negative patient becoming diagnosed as SIRS-positive 6 to 48 hours (e.g., 6, 12, 24 or 48 hours) later.
The MIMIC II database contains a variety of hospital data for four intensive care units (ICUs) from a single hospital, the Beth Israel Deaconess Medical Center (BIDMC) in Boston. MIMIC itself stands for “Multiparameter Intelligent Monitoring in Intensive Care,” and this second version is an improvement on the original installment. The tabulated hospital data are time-stamped and contain physiological signals and measurements, vital signs, and a comprehensive set of clinical data, including medications taken (amounts, times, and routes); laboratory tests, measurements, and outcomes; feeding and ventilation regimens; diagnostic assessments; and billing codes representing services received. MIMIC II contains information for over 33,000 patients collected between 2001 and 2008 from the medical ICU (MICU), surgical ICU (SICU), coronary care unit (CCU) and cardiac surgery recovery unit (CSRU), as well as the neonatal ICU (NICU). Operationally, MIMIC II is organized as a relational PostgreSQL database that can be queried using the SQL language, for convenience and flexibility. The database is organized according to individual patients, each denoted by a unique integer identification number. A particular patient may have experienced multiple hospital admissions and multiple ICU stays for each admission, all of which are accounted for in the database. To comply with the Health Insurance Portability and Accountability Act (HIPAA), the individuals in the database were de-identified by removing protected health information (PHI). Moreover, the entire time course for each patient (e.g., birthday, all hospital admissions, and ICU stays) was time-shifted to a hypothetical period in the future, to further reduce the possibility of patient re-identification.
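As an illustration of such SQL queries, the following Python sketch retrieves time-stamped heart-rate chart events (item id 211, as noted below) for a single patient. The connection parameters and the schema, table, and column names (e.g., mimic2v26.chartevents, subject_id, itemid, charttime, value1num) are assumptions here and should be verified against the installed copy of the database.

    # Illustrative sketch of querying the MIMIC II PostgreSQL database with SQL.
    # Connection parameters and schema/table/column names are assumptions and
    # should be checked against the local MIMIC II installation.
    import psycopg2

    conn = psycopg2.connect(dbname="mimic2", user="mimic", password="...", host="localhost")
    cur = conn.cursor()

    # Heart-rate chart events (item id 211) for one patient, ordered in time.
    cur.execute(
        """
        SELECT subject_id, itemid, charttime, value1num
        FROM mimic2v26.chartevents
        WHERE subject_id = %s AND itemid = 211
        ORDER BY charttime
        """,
        (3,),
    )
    for subject_id, itemid, charttime, value in cur.fetchall():
        print(subject_id, itemid, charttime, value)

    cur.close()
    conn.close()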
Although the MIMIC II database was used as a source of measurements and other data for the invention, the invention disclosed here is not limited by the MIMIC II database or the specific measurements, representations, scales, or units from the BIDMC or the MIMIC II database. For example, the units that are used to measure a feature for use in the invention may vary according to the lab or location where the measurement occurs. The standard dose of medication or route of administration may vary between hospitals or hospital systems, or even the particular member of a class of similar medications that is prescribed for a given condition may vary. Mapping of the specific features found in the MIMIC II database to those used in another hospital system is incorporated into the invention disclosed here to allow use of this invention in a different hospital. For example, if the MIMIC II database records the weight of patients in pounds and another hospital does so in kilograms, one of ordinary skill in the art would appreciate that it is a simple matter to convert the patients’ weights from kilograms to pounds. Likewise, it is straightforward to adjust the predictive formula of the invention to accept kilograms instead of pounds. This sort of mapping between features also can be done between medications that carry out the same functions but may differ in standard dosages, and/or alternative laboratory measurements that measure the same parameter, vital sign or other aspect in a patient, etc.
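A trivial Python sketch of such a unit mapping is shown below; the function name is illustrative, and the conversion factor is the standard kilogram-to-pound factor.

    # Simple sketch of the feature mapping described above: converting a weight
    # recorded in kilograms into pounds before applying a model trained on pounds.
    KG_TO_LB = 2.20462   # standard conversion factor

    def map_weight_to_model_units(weight, unit):
        """Return the weight in pounds, the unit assumed by the trained model."""
        if unit == "kg":
            return weight * KG_TO_LB
        if unit == "lb":
            return weight
        raise ValueError("unrecognized unit: " + unit)

    print(map_weight_to_model_units(80.0, "kg"))   # approximately 176.4 lb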
In addition, rather than mapping feature-to-feature as described in the above paragraph and then using the exemplary models presented here with the newly mapped features, it is straightforward to use the methods of the invention taught here to take existing hospital datasets and retrain models in accordance with the techniques of the invention described herein. Those models can then be used predictively, in the manner of the invention shown here. The same feature removal and feature selection methods can be used, or the features found useful here can guide hand-curated feature selection methods. All of this would be apparent to one of ordinary skill in the art.
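As one possible sketch of such retraining, the following Python example fits a classifier, with a simple univariate feature-selection step, to a hypothetical local hospital dataset. The estimator, the feature-selection method, the data shapes, and the synthetic data are assumptions, and any suitably validated model-building procedure could be substituted.

    # Illustrative sketch of retraining a predictive model on another hospital's
    # dataset. The synthetic data, feature selection and estimator are assumptions.
    import numpy as np
    from sklearn.feature_selection import SelectKBest, f_classif
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    rng = np.random.default_rng(1)
    X_local = rng.normal(size=(500, 40))       # hypothetical local feature matrix
    y_local = (X_local[:, :3].sum(axis=1) + rng.normal(0, 1, 500) > 0).astype(int)

    model = make_pipeline(SelectKBest(f_classif, k=10),
                          LogisticRegression(max_iter=1000))
    model.fit(X_local, y_local)                # retrain on the local dataset
    print("training accuracy:", model.score(X_local, y_local))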
The MIMIC II Database is available online at the following site [https://physionet.org/mimic2/], and is incorporated herein by reference in its entirety. As a person of ordinary skill in the art would appreciate, the MIMIC II database can be readily and easily accessed as follows. Information at the website https://physionet.org/mimic2/mimic2_access.shtml describes how to access the MIMIC II clinical database. First one needs to create a PhysioNetWorks account at https://physionet.org/pnw/login. One then follows the directions at https://physionet.org/works/MIMICIIClinicalDatabase/access.shtml, which includes completing a training program in protecting human research participants (which can be accomplished online) because of research rules governing human subjects data. Finally, one applies for access to the database by filling out an application, including certification from the human subjects training program and a signed data use agreement. These are common steps familiar to one of ordinary skill in the art when dealing with such medical data on human subjects and one of ordinary skill in the art would expect such steps to be taken. Approved applicants, such as a person of ordinary skill in the art, receive instructions by email for accessing the database. When updated (including the recent release of the MIMIC III database), the updated features can be used as described herein for prediction of SIRS, and are within the scope of the invention.
Data were obtained from the MIMIC II Database from the tables representing chart measurements, laboratory measurements, drugs, fluids, microbiology, and cumulative fluids for patients. See Saeed et al. (2011). The following tables were used to extract patient data used according to the invention for prediction:
The chart events table contains charted data for all patients. We recorded the patient id, the item id, the time stamp, and numerical values.
The lab events table contains laboratory data for all patients. We recorded the patient id, the item id, the time stamp, and numerical values.
The io events table contains input and output (fluid transfer) events for all patients. We recorded the patient id, the item id, the time stamp, and numerical value (generally of the fluid volume).
The micro events table contains microbiology data for all patients. We recorded the patient id, the item id, the time stamp, and the result interpretation. The result interpretation that we gather is based on 2 categories ‘R’ (resistant) and ‘S’ (sensitive) that are mapped to 1 and -1 values, respectively.
The med events table contains medication data for all patients. We recorded the patient id, the item id, the time stamp, and the medication dose.
The total balance (totalbal) events table contains the total balance of input and output events. We recorded the patient id, the item id, the time stamp, and the cumulative io volume.
As a person of ordinary skill in the art would appreciate, the above entries, those in Tables 1 to 7 herein, and those in the MIMIC II database correspond to features (as shown in the MIMIC II database and below) identified by abbreviations whose meanings are well known to those of ordinary skill in the art. Moreover, the corresponding entries in the MIMIC II database, such as measurements and other parameters, are features in accordance with the invention.
All patients with sufficient data in the MIMIC II database, except those that spent any time in the neonatal ICU, were included in the development. Patients who met at least two of the four conditions for SIRS simultaneously at some point in their stay were identified from the database. (The four conditions are a temperature of less than 36° C. or greater than 38° C., a heart rate of greater than 90 beats per minute, a respiratory rate of greater than 20 breaths per minute, and a white blood cell (WBC) count of less than 4,000 per microliter or greater than 12,000 per microliter.) We checked for the occurrence of SIRS using all 6 possible 2-condition cases for each patient during their ICU stays without repetition at any given time using time-stamped chart times. The occurrence of SIRS is modeled as a point process which requires that the 2 or more SIRS conditions occur simultaneously. Heart rate was extracted from item id 211 in the chart events table. Respiration rate measurement was extracted by item ids 219, 615, and 618 in the chart events table. Temperatures were extracted from item ids 676, 677, 678, and 679 in the chart events table. Finally, WBC measurements were extracted from item ids 50316 and 50468 in the lab events table. Where multiple sources of a measurement were available, the one most recently updated at the time point was used.
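By way of a non-limiting illustration, the following Python sketch shows one way the two-of-four SIRS criterion described above could be checked at a single charted time; the function names and the handling of missing measurements are illustrative assumptions, not part of the database extraction itself.

```python
def sirs_flags(temp_c, heart_rate, resp_rate, wbc_per_uL):
    """Evaluate the four SIRS criteria described above for one charted time.

    Any argument may be None if the measurement is unavailable at that time.
    """
    return {
        "temperature": temp_c is not None and (temp_c < 36.0 or temp_c > 38.0),
        "heart_rate": heart_rate is not None and heart_rate > 90,
        "respiratory_rate": resp_rate is not None and resp_rate > 20,
        "wbc": wbc_per_uL is not None and (wbc_per_uL < 4000 or wbc_per_uL > 12000),
    }

def meets_sirs(temp_c, heart_rate, resp_rate, wbc_per_uL):
    """SIRS is flagged when at least two of the four criteria hold simultaneously.

    Counting the flags is equivalent to checking all 6 possible 2-condition cases.
    """
    return sum(sirs_flags(temp_c, heart_rate, resp_rate, wbc_per_uL).values()) >= 2

# Example: tachycardia plus tachypnea at the same charted time meets SIRS.
print(meets_sirs(temp_c=37.0, heart_rate=105, resp_rate=24, wbc_per_uL=8000))  # True
```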
Each time SIRS conditions occurred in a patient, we recorded the time-stamped date and time of the SIRS occurrence and the patient id. We used the timestamps of positive patients to collect data from the 7 tables as described above 6, 12, 24, and 48 hours (the “time point”) before the occurrence of SIRS, using the most recent data nearest the time point for each patient, but not data from more than 1 week before the onset or from any time before the current stay. For all patients for which no SIRS occurrence was found (SIRS negative patients), we recorded their ids. Using their ids, we collected data for 6, 12, 24 and 48 hours before some point in their last recorded stay. The ids for positive patients and negative patients are disjoint sets. The numbers of positive, negative, and total patients for the 48-hour time point were 9,029, 5,249, and 14,278, respectively; for the 24-hour time point 11,024, 5,249, and 16,273; for the 12-hour time point 13,033, 5,249, and 18,282; and for the 6-hour time point 15,075, 5,249, and 20,324. These numbers differ at different time points (and grow for shorter times) because fewer patients were present in the ICU 48 hours before the onset of SIRS than were present 6 hours before the onset of SIRS.
Data were normalized to a mean of zero and standard deviation of one. That is, a normalized version of each datum was created by subtracting the mean for each feature (taken across all occurrences for each feature or measurement type) and dividing by the standard deviation (taken across the same distribution). The distribution of each feature in the data was compared between the positives (patients who met the criteria for SIRS) and negatives (those that did not) at each of the four time points using the Bhattacharyya distance. That is, a histogram giving the population of SIRS-positive patients as a function of the measured value of some feature was compared to the same histogram for SIRS-negative patients, and the Bhattacharyya distance was computed between these two histogram distributions. Any feature whose Bhattacharyya distance was less than 0.01 at all four time points was removed from further consideration. See Bhattacharyya (1943). The list of features remaining after this step and used in the next steps of the analysis, as well as the mean and standard deviation of each feature, is shown in Table 1.
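The following Python sketch illustrates the normalization and histogram-based Bhattacharyya filtering described above; the number of histogram bins and the array-based data layout are assumptions made for the sake of illustration.

```python
import numpy as np

def zscore_normalize(values):
    """Normalize a feature column to zero mean and unit standard deviation."""
    values = np.asarray(values, dtype=float)
    return (values - values.mean()) / values.std()

def bhattacharyya_distance(pos_values, neg_values, bins=50):
    """Histogram-based Bhattacharyya distance between positives and negatives.

    The number of bins is an assumption; the application does not specify it.
    """
    lo = min(np.min(pos_values), np.min(neg_values))
    hi = max(np.max(pos_values), np.max(neg_values))
    p, _ = np.histogram(pos_values, bins=bins, range=(lo, hi))
    q, _ = np.histogram(neg_values, bins=bins, range=(lo, hi))
    p = p.astype(float) / p.sum()
    q = q.astype(float) / q.sum()
    bc = np.sum(np.sqrt(p * q))          # Bhattacharyya coefficient
    return -np.log(max(bc, 1e-12))       # guard against log(0)

def keep_feature(distances_at_time_points, threshold=0.01):
    """Drop a feature only if its distance is below the threshold at ALL time points."""
    return any(d >= threshold for d in distances_at_time_points)
```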
Machine learning was carried out with the scikit-learn package under Python version 2.7, running on the Windows operating system within the Anaconda environment. In addition, we used the statistical software package R 3.1.2 (64-bit version) under Windows to perform tasks in data preparation and analysis. The scikit-learn package, version 0.16.0, is designed to produce machine learning models for the purpose of classification and regression on dense and sparse datasets. The following classifiers were used: Nearest Neighbors, Linear SVM (support vector machine), RBF SVM (radial basis function support vector machine), Decision Trees, Random Forest (RF), AdaBoost, Naive Bayes, and Logistic Regression (LR). The best classifier can be selected through model and parametric optimization. For some applications the best classifier might be the one with the highest accuracy among all the classifiers tested. For other applications the best classifier might be the one with the highest positive predictive value (PPV), negative predictive value (NPV), specificity, sensitivity, area under the curve (AUC), as defined below, or some other combination of performance attributes. For the examples presented here, accuracy was generally used to rate classifiers. Because the Logistic Regression performed very well, the machine learning results presented here use it unless otherwise stated. Although the foregoing is what we used for our work, a person of ordinary skill in the art would readily appreciate that many other machine learning concepts and algorithms could equally be used and applied in the methods of the invention, including but not limited to artificial neural networks (ANN), Bayesian statistics, case-based reasoning, Gaussian process regression, inductive logic programming, learning automata, learning vector quantization, informal fuzzy networks, conditional random fields, genetic algorithms (GA), information theory, support vector machines (SVM), Averaged One-Dependence Estimators (AODE), group method of data handling (GMDH), instance-based learning, lazy learning, and Maximum Information Spanning Trees (MIST). Moreover, various forms of boosting can be applied with combinations of methods.
Some of these learning methods require additional parameters to run. For the complexity parameter in the SVM and LR classifiers, we used, in separate runs, values ranging from 0.0001 to 1000 by powers of ten. The same set of values was also applied to the gamma parameter in the RBF SVM. The Decision Tree method was used with a maximum depth of 10, AdaBoost had a minimum number of estimators equal to 50, and RF had a minimum of 50 estimators.
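By way of illustration only, the classifiers and parameter ranges described above could be instantiated in scikit-learn roughly as follows; this sketch assumes a current scikit-learn API rather than the specific version 0.16.0 noted above, and the helper name is arbitrary.

```python
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.linear_model import LogisticRegression

# Complexity / gamma grid: 0.0001 to 1000 by powers of ten, as described above.
PARAM_GRID = [10.0 ** k for k in range(-4, 4)]

def build_classifiers(C=1.0, gamma=0.1):
    """Instantiate the classifier family described above for one (C, gamma) choice."""
    return {
        "Nearest Neighbors": KNeighborsClassifier(),
        "Linear SVM": SVC(kernel="linear", C=C),
        "RBF SVM": SVC(kernel="rbf", C=C, gamma=gamma),
        "Decision Tree": DecisionTreeClassifier(max_depth=10),
        "Random Forest": RandomForestClassifier(n_estimators=50),
        "AdaBoost": AdaBoostClassifier(n_estimators=50),
        "Naive Bayes": GaussianNB(),
        "Logistic Regression": LogisticRegression(C=C, penalty="l2"),
    }

# Example: one setting drawn from the grid.
models = build_classifiers(C=PARAM_GRID[3], gamma=PARAM_GRID[3])
```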
In a typical machine learning calculation set, and as we used here, independent of which classifier was being used, the original dataset was split in a random fashion into 2 datasets: a training dataset and a testing dataset, with the training dataset containing a random 80% of the data instances (an individual patient acquiring SIRS at a specific time [positive] or not [negative]) and the testing dataset containing the remaining 20% of the data. This separation of training from testing data is typical in supervised machine learning applications, so that the model developed in the training phase can be evaluated in the testing phase on data to which it has not previously been exposed (i.e., the testing data represents patients the model has never seen; after training on the training data, the model makes predictions about those patients, and those predictions are then evaluated against the testing data itself).
For each of these classifiers, the model parameters that determine their predictive model were computed on the basis of the training dataset. For the logistic regression results reported here, the parameters for each resulting model are one coefficient for each data feature in the model plus a single bias value. A data feature is a type of measurement (systolic blood pressure measurement, for example). As shown in the equation below, a linear combination of the coefficients (w_j) and the normalized data features (patient_data_{i,j}), together with the bias (b), produces the prediction.
Each classifier model, with its own respective set of parameters obtained from the training dataset (as described above), was then evaluated on the testing dataset, and the prediction results were expressed in the form of accuracy, positive predictive value (PPV), sensitivity, specificity, negative predictive value (NPV), and area under the curve (AUC), as defined below.
The logistic regression was selected for its excellent accuracy, positive predictive value and its robustness. See Yu et al. (2011). Several random combinations of training and test datasets were used to reproduce the results. This strategy was used to eliminate the possibility that results were due to a serendipitous selection of the test dataset. The logistic regression model results presented here were run with complexity parameter set equal to 0.005 and penalty L2.
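The following Python sketch illustrates the repeated random 80/20 splits and the logistic regression settings (complexity parameter 0.005, L2 penalty) described above; the number of repetitions and the use of the current scikit-learn API are assumptions for illustration.

```python
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, roc_auc_score

def evaluate_logistic_regression(X, y, n_repeats=5):
    """Repeat random 80/20 splits and fit the logistic regression described above.

    X: normalized feature matrix (patients x features); y: +1 / -1 SIRS labels.
    Returns a list of (accuracy, AUC) pairs, one per random split.
    """
    scores = []
    for seed in range(n_repeats):
        X_tr, X_te, y_tr, y_te = train_test_split(
            X, y, test_size=0.20, random_state=seed)
        model = LogisticRegression(C=0.005, penalty="l2")
        model.fit(X_tr, y_tr)
        prob = model.predict_proba(X_te)[:, 1]   # probability of the +1 (SIRS) class
        pred = model.predict(X_te)
        scores.append((accuracy_score(y_te, pred), roc_auc_score(y_te, prob)))
    return scores
```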
Predictions are made from the logistic regression model using the following equation:

P(SIRS | patient_data_i) = 1 / (1 + exp(-(b + Σ_{j=1}^{num_features} w_j · patient_data_{i,j})))

where P(SIRS | patient_data_i) is the probability that a particular patient i, presenting normalized patient data represented by the vector patient_data_i, will develop SIRS at the corresponding time point in the model, given the model bias parameter b and the model coefficients w_j corresponding to the normalized patient feature measurements patient_data_{i,j} (of which there are num_features, indexed by j).
In the work presented here a probability of greater than 50% (one-half) results in a prediction of the patient having SIRS at the corresponding future time point, and a probability less than or equal to 50% (one-half) is a prediction of not having SIRS. As one of ordinary skill in the art would appreciate, it is straightforward to apply more sophisticated treatments of this probability to assign finer grained priorities to the possibility and severity of a condition. For example, one could use the probability directly as a measure of the predicted probability of developing SIRS, where, rather than a binary prediction of which patients will or will not develop SIRS, one could map the probabilities to categories such as “highly likely to develop SIRS,” “probably will develop SIRS,” “could develop SIRS,” “unlikely to develop SIRS,” and “highly unlikely to develop SIRS.” These finer grained priorities may be especially useful to hospitals in taking action on the predictions.
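As a purely illustrative sketch of such a finer grained treatment, the probability could be mapped to categories as follows; the 0.5 threshold is taken from the text above, while the remaining cut points are arbitrary assumptions.

```python
def categorize_sirs_probability(p):
    """Map a predicted SIRS probability to a coarse priority label.

    The 0.5 decision threshold comes from the binary prediction described above;
    the remaining cut points are illustrative only.
    """
    if p > 0.90:
        return "highly likely to develop SIRS"
    if p > 0.70:
        return "probably will develop SIRS"
    if p > 0.50:
        return "could develop SIRS"
    if p > 0.30:
        return "unlikely to develop SIRS"
    return "highly unlikely to develop SIRS"

print(categorize_sirs_probability(0.83))  # "probably will develop SIRS"
```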
A machine learning algorithm can be used to generate a prediction model based on a patient population dataset. However, there is a tremendous amount of data in the patient population dataset, much of which is not necessary or provides little contribution to the predictability of a particular disease for which the prediction model is being trained. Additionally, it is often the case that different particular patients only have available data for different respective subsets of all of the features of the datasets, so that a prediction model based on all of the features of the patient population dataset might not be usable for particular patients or might output suboptimal predictions for the particular patients. An example embodiment of the present invention identifies a plurality of subsets of features within the totality of features of the patient population dataset for which to produce respective prediction models, that can be used to predict a disease, e.g., SIRS, based on data of only the respective subset of features.
Thus, in an example embodiment of the present invention, a computer system is provided with a patient population dataset, from which the system selects a plurality of subsets, each subset being used by a machine learning algorithm, which is applied by the system to the respective subset, to train a new prediction model on the basis of which to predict for a patient onset of a disease, e.g., SIRS. Thus, for each selected subset, a respective prediction model can be trained, with each of the trained prediction models being subsequently applied to an individual patient’s data with respect to the particular group of features of the subset for which the respective prediction model had been trained.
Thus, according to the example embodiment, in a preliminary selection step, a feature selection method is applied to select relevant subsets of features for training respective prediction models. In an example embodiment, prior to application of the feature selection method (or, viewed differently, as a first step of the feature selection method), features are initially removed from the dataset based on Bhattacharyya distance as described above. Then, from those features not removed based on the Bhattacharyya distance, the system proceeds to select groups of relevant features to which to apply a machine learning algorithm, where the machine learning algorithm would then generate a respective prediction model based on data values of the selected relevant features of each of one or more of the groups.
The feature selection method includes computing the correlation between each feature at a given time point and the output array (-1 for negatives [patients who had not developed SIRS]; +1 for positives [patients who had developed SIRS at the target time]), and computing the correlation between all pairs of features at a given time point. Iteratively, a feature was selected as a primary feature at a time point if it had the greatest correlation with the output array amongst all of the remaining features for that time point (6, 12, 24, or 48 hours). Then, for that time point, all others of the remaining features that had a correlation of 60% or greater (when taken across patients) with the most recently selected primary feature at that time point (i.e., the primary feature selected for the present iteration) were selected as secondary features associated with that primary feature and time point. For example, in an example embodiment, for each feature, a vector is generated that is populated with a value for the respective feature for each of a plurality of patients of a patient population, and the correlation is determined between the vector of the selected primary feature and the remaining feature vectors. The vectors can further be indicated to be associated with negatives or with positives.
All secondary features thus selected were then removed from the set of remaining features (so that once a feature is selected as a primary or secondary feature, it can no longer be selected as a primary feature in a subsequent iteration). This selected primary feature and its associated secondary features were together considered a feature group. Because of the method used to select a feature group, the members of a particular feature group had some correlation with the output (whether patients developed SIRS at a specific time in the future) and some correlation amongst themselves. Thus, they are expected to be useful in the prediction of SIRS, but members within a feature group might be partially redundant owing to their correlation amongst themselves. This process was repeated iteratively, first picking an additional primary feature at the same time point and then its associated secondary features at that time point (which together produced an additional feature group).
In an example embodiment, the iterative feature selection method is discontinued as soon as it is determined that the remaining unselected features have essentially no predictive power, as indicated by machine learning, e.g., with an AUC very close to 0.50 (such as 0.50 ± 0.05). For example, the system selects a primary feature and its secondary features as a new feature subset. The system then applies machine learning to the combination of all of the remaining features of the patient population dataset. If the machine learning produces an operable prediction model based on those remaining features, then the system continues on with another iteration to find one or more further subsets of those remaining features that can be used alone. On the other hand, if the machine learning does not produce an operable prediction model based on those remaining features, then the iterative selection method is ended. Once the iterative selection method is ended, the system applies a machine learning algorithm to each of one or more, e.g., each of all, of the individual feature subsets that had been selected by the iterative feature selection method to produce respective prediction models.
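The following Python sketch illustrates the iterative primary/secondary grouping described above; the use of the absolute Pearson correlation and the array-based data layout are assumptions, and the AUC-based stopping criterion on the remaining features is left to the caller.

```python
import numpy as np

def select_feature_groups(X, y, corr_threshold=0.60):
    """Iteratively group features as primary/secondary, as described above.

    X: patients x features matrix (one time point); y: +1/-1 SIRS outcome array.
    Returns a list of (primary_index, [secondary_indices]) feature groups.
    This loop runs until no features remain; in practice the iteration would
    stop once machine learning on the remaining features yields an AUC near 0.50.
    """
    n_features = X.shape[1]
    remaining = list(range(n_features))
    # Correlation of each feature with the +1/-1 output array.
    corr_with_output = np.array(
        [abs(np.corrcoef(X[:, j], y)[0, 1]) for j in range(n_features)])
    groups = []
    while remaining:
        # Primary feature: greatest correlation with the output among those left.
        primary = max(remaining, key=lambda j: corr_with_output[j])
        remaining.remove(primary)
        secondaries = []
        for j in list(remaining):
            r = abs(np.corrcoef(X[:, primary], X[:, j])[0, 1])
            if r >= corr_threshold:
                secondaries.append(j)
                remaining.remove(j)
        groups.append((primary, secondaries))
    return groups
```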
In an example embodiment, this process is carried out separately for each of a plurality of values of a particular constraint, e.g., time points. For example, this method was performed for each of the noted four onset time points of 6, 12, 24, and 48 hours.
As one of ordinary skill in the art would readily appreciate, the above Machine Learning on Data and Feature Selection methods were carried out using the MIMIC II database, but the same methods of the invention could be utilized on another database from other hospitals to achieve the results of the invention, including identification of primary, secondary and additional features, exemplified here with the MIMIC II database.
The feature selection method was applied to the entire patient population dataset. Once the relevant features were selected in this manner, the patient population dataset was divided into the training dataset and the testing dataset for performing the training and testing steps.
The performance results for machine learning with the linear support vector machine method (with the complexity parameter C=0.001) using data associated with the features in Table 4 are shown in Table 2. Results for four separate sets of calculations are presented in this Table 2, each set corresponding to a respective onset time period. For each of the respective onset time periods, the table shows, in the center column, results of a calculation generated based on features grouped as primary and secondary features, and results of calculations generated based on the “remaining features” that were not primary or secondary features and were not removed in the Bhattacharyya procedure. The results show that the former calculations are predictive and the latter calculations are not. Table 3 shows further details of this “remaining features” set for the 48-hour dataset. Each calculation used a different set of data (collected 6, 12, 24, or 48 hours in advance of the onset of SIRS for the positive patients). For each of the four sets of calculations, the results show that using only the data associated with the features in Table 4, accurate predictions could be made regarding which patients would and which would not develop SIRS, as judged by statistical measures familiar to the machine learning community and a person of ordinary skill in the art, such as accuracy, sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV), and area under the curve (AUC). In addition, the numbers of true positives (TP), true negatives (TN), false positives (FP), and false negatives (FN) are given. Parallel experiments using only feature data associated with features that were not primary or secondary for the associated time point dataset and were not removed by the Bhattacharyya procedure (the “remaining features”) were unable to make accurate predictions regarding which patients would and which would not develop SIRS at the selected time, demonstrating the effectiveness of the invention. This is indicated by an area under the curve very close to 50% when the remaining features were used.
The meanings of various terms in Table 2 (and also used elsewhere in this application) are standard in the machine learning literature and are well known to one of ordinary skill in the art, but are set out here in exemplary form for the sake of completeness. True positives (TP) are patients, whether in the training set or testing set, and whether historical or prospective, who are predicted to develop SIRS (generally in a given time window) and who do subsequently develop SIRS (generally in that given time window). True negatives (TN) are patients predicted not to develop SIRS who do not subsequently develop SIRS. False positives (FP) are patients predicted to develop SIRS but who do not subsequently develop SIRS, and false negatives (FN) are patients predicted not to develop SIRS but who subsequently do develop SIRS. Among any set of patients for whom predictions are made, the accuracy statistic is the total number of correct predictions divided by the total number of predictions made. Thus, accuracy can be represented as (TP+TN)/(TP+FP+TN+FN). Accuracy, and the other statistics described and used here, are often represented as a percentage (by multiplying by 100 and adding the percentage symbol). Sensitivity is the fraction of patients who subsequently develop SIRS who are correctly predicted, and can be represented as TP/(TP+FN). Specificity is the fraction of patients who subsequently do not develop SIRS who are correctly predicted, and can be represented as TN/(TN+FP). Positive predictive value (PPV) is the fraction of positive predictions that are correct, and can be represented as TP/(TP+FP). Negative predictive value (NPV) is the fraction of negative predictions that are correct, and can be represented as TN/(TN+FN). Area under the curve (AUC) is the area under the receiver operating characteristic (ROC) curve, which is a plot of sensitivity, on the y-axis, against (1 - specificity), on the x-axis, as the discrimination threshold is varied. It is a non-negative quantity whose maximum value is one. Different machine learning methods have their own mechanism for varying the discrimination threshold. In logistic regression this can be achieved by changing the threshold probability between calling a prediction negative and positive (nominally 0.5), for example by progressively varying it from zero to one, which then maps out the ROC curve.
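For illustration, the statistics defined above can be computed from the four confusion counts as in the following sketch; the example counts are arbitrary and are not results of the invention.

```python
def prediction_metrics(TP, TN, FP, FN):
    """Compute the standard statistics defined above from the confusion counts."""
    total = TP + TN + FP + FN
    return {
        "accuracy": (TP + TN) / float(total),
        "sensitivity": TP / float(TP + FN),
        "specificity": TN / float(TN + FP),
        "PPV": TP / float(TP + FP),
        "NPV": TN / float(TN + FN),
    }

# Example with arbitrary counts: 80 TP, 70 TN, 30 FP, 20 FN gives
# accuracy 0.75, sensitivity 0.80, specificity 0.70, PPV ~0.727, NPV ~0.778.
print(prediction_metrics(80, 70, 30, 20))
```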
Together, this evidence shows that the features in Table 4 have value in machine learning prediction of patients who will and who will not develop SIRS in the next 6-to-48 hours, and that other features in the dataset do not. That is, these features selected by the feature selection method described above, when applied to a machine learning algorithm, cause the machine learning algorithm to generate a good prediction model, the prediction model accepting patient-specific values for those selected features to predict likelihood of the respective specific patients developing SIRS.
An example of a poorly predictive model, with an area under the curve of 50%, is shown in Table 3 (48-hour data, remaining features only [not primary features for the 48-hour data, not secondary features for the 48-hour data, and not features removed in the Bhattacharyya procedure]). Most of the features have a coefficient value of zero, indicative of a model that had difficulty learning from the training data. This indicates that the remaining features set, listed in Table 3, was not sufficiently informative to predict SIRS occurrence.
From the 48-hour dataset, 32 features in total were selected (20 primaries and 12 secondaries; some of the primary features had no associated secondary features). Each primary feature and its associated secondary features (if any) made up a “feature group,” giving 20 initial feature groups. An additional 16 features in total were added to this feature set to optimize performance on the 6-, 12-, and 24-hour datasets; for example, feature groups that were selected for a different onset time period can be included in this way. These additional features made up a 21st feature group, called “additional” features. For example, in an example embodiment, the system applies the feature selection method described above based on data associated with positives and negatives of developing a disease within each of a plurality of time frames, to identify respective relevant subsets of features for predicting onset of the disease in the respective time frame. This may result in identification of a feature subset as relevant for predicting onset within a first of the time frames, which feature subset had not been identified as relevant for predicting onset within one or more others of the time frames. Nevertheless, in an example embodiment, even if a feature subset had not been selected for prediction of onset within a particular time frame, if the feature subset had been selected for a different time frame, it is used for training a prediction model even for the time frame for which it had not been selected. (If it is subsequently determined that the generated model does not yield satisfactory prediction results for that time frame, then it is discarded as it relates to that time frame.)
Table 4 shows the selected features organized by feature group, including their identifier in the MIMIC II database, the role they play (as primary, secondary, or additional features), and a brief description. Using a separate database, the same procedure as detailed above could be used to identify and select primary and secondary features and additional features from that separate database, using the above methods of the invention, which is within the scope of the invention. Likewise, such separate measurements are within the meaning of the term “feature” as used in this application. Further, as explained above, if a given hospital measures a feature in different units or uses a different type of measurement for the same feature as compared to the MIMIC II database, those data are also “features” as defined herein and can be used in the above selection and prediction methods of the invention, which is within the scope of the invention. As used herein, a “MIMIC II feature” is a feature (whether primary, secondary, additional or remaining) from the MIMIC II database, while a “feature” includes such MIMIC II features and other features that may be identified and/or selected from other hospital databases, in accordance with the invention and as described herein. Such features from other databases are also termed primary, secondary, additional and remaining in accordance with the methods of the invention.
Because the features within the first 20 feature groups are correlated with each other (especially within feature groups with more than one feature), we carried out a further set of experiments in which we chose two features from each of the first 14 feature sets (but only one feature from feature sets that had only one feature) and two features from the additional set, and tested their predictive ability. Ten independent experiments of this type were carried out using the same features in the model, but different random divisions of the data into training and testing data. Machine learning as above on the training sets was used to create a model that was then tested on the testing set (containing the patients the model had not seen). The scores on each of the ten testing sets are reported in Table 5 for each of the four time points, together with the features in that dataset and the predictive model resulting from the training that produced these results. The results show all of the models have very good predictive capabilities, even though each of the respective models may differ from one another. This is consistent with the features being powerfully useful for accurate prediction.
As shown in the below examples, depending on the number of features identified using the above methods of the invention, whether or not SIRS will occur can be predicted with an accuracy of 60% or greater, more preferably 70% or greater, and most preferably 80% or greater. Predictions of patients likely to develop SIRS can lead to improved healthcare outcomes and reduced cost through appropriate monitoring and intervention.
Machine learning was applied to the MIMIC II database as described above, using logistic regression on the 48-hour dataset, using feature sets of five features selected from the first 20 groups of Table 4. Machine learning models developed on a training dataset produced a wide range of accuracies when applied to a testing dataset, from above 80% to below 70%, depending on the particular feature set used in the learning, as shown in Table 6.
Machine learning was applied to the MIMIC II database as described above, using logistic regression on the 48-hour dataset, using feature sets of one and two features selected from the first 20 groups of Table 4. Machine learning models developed on the training dataset produced useful accuracies when applied to the testing dataset, as shown in Table 7.
Using the invention, the probability of SIRS onset within a given time window for a given patient can be determined. The methods described here show how to build predictive models of which patients will and which will not develop SIRS in a given time frame using a relatively small number of features (patient data measurements) pared down from the much larger number frequently available in a hospital database, such as the MIMIC II database. The models developed and shown here can be used directly to make predictions on hospital patients. One merely needs to acquire measurements of data for a particular patient corresponding to the features in the model, normalize them as shown here, use the model parameters (bias b and coefficients w_j), and apply the logistic regression formula to produce a probability of SIRS in the patient at the time point indicated by the model (6, 12, 24, or 48 hours). If the probability is greater than 50% (one-half), then SIRS is predicted; otherwise, it is not. As illustrated above, the probability can be used in a multitude of ways to assign a finer grained classification of the likelihood of the patient developing SIRS.
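The following Python sketch illustrates this direct application of a trained model to a single patient; the function names are arbitrary, and in an actual deployment the means, standard deviations, coefficients, and bias would be taken from the trained model and its normalization statistics (e.g., Table 1) rather than supplied ad hoc.

```python
import math

def predict_sirs_probability(raw_measurements, means, stds, coefficients, bias):
    """Apply a trained logistic regression model of the form described above.

    All four lists are aligned by feature; the numerical values used by an
    actual deployment would come from the trained model (w_j, b) and the
    normalization statistics for the corresponding features.
    """
    z = bias
    for x, mu, sigma, w in zip(raw_measurements, means, stds, coefficients):
        z += w * (x - mu) / sigma          # normalize, then weight
    return 1.0 / (1.0 + math.exp(-z))      # logistic function

def predict_sirs(raw_measurements, means, stds, coefficients, bias):
    """Binary prediction: SIRS is predicted when the probability exceeds 0.5."""
    return predict_sirs_probability(
        raw_measurements, means, stds, coefficients, bias) > 0.5
```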
The unexpectedly high predictive accuracy for SIRS of the methods of the invention has been shown in this application, for example, by the above accuracy and other determinations in the Predictive Results of Tables 2, 5, 6, and 7. The unexpectedly high predictive accuracy with relatively small sets of feature measurements has also been shown in this application. For example, using the features of Set 1 in Table 6, the method of the invention resulted in an 83.67% value for Accuracy regarding onset of SIRS in a 48-hour model. In the most general terms, this indicates that when the features of that Set 1 were applied to the above model based on the MIMIC II database, the predicted probability (yes or no) of the onset of SIRS at 48 hours resulted in 83.67% Accuracy. In other words, the Set 1 features were applied to the 80% of data designated as training data according to the above method to determine the probability of SIRS onset at 48 hours using those features, and the Accuracy result of 83.67% was determined against the 20% test data relative to those same features and whether or not SIRS occurred at 48 hours, as a person of ordinary skill in the art would appreciate.
Rather than use the precise models presented here directly, one can use the methods here to produce new models, using available hospital data (for example, historical or retrospective data from the previous few weeks, months, or years at the same or similar hospital or hospital system) and apply the methods of the invention to identify feature sets and models, and then to apply them as described here. The methods shown here can be used to prepare the data, select features, and carry out machine learning to produce models and evaluate the predictive ability of those models. The methods shown here can then be used to apply those models to make predictions on new patients using current measurements on those new patients.
For example, with regard to a patient who walks in the door of a hospital for assessment, the invention can be applied in the following manner relative to the MIMIC II database features. The patient’s data can be obtained for the various primary, secondary, and additional features over the course of time and in the ordinary course of the patient’s stay in the hospital. To the extent that the obtained measurements match any of the above models and their Parameter Sets, the method of the invention and the above models can be applied to the patient’s features to determine the probability of the patient developing SIRS at 6, 12, 24 or 48 hours in the future. For example, if one has the measurement corresponding to lab 50019 (Set 3 from Table 7), one can make a prediction using that patient measurement, normalizing it, and applying the coefficient and bias from the table to produce a probability of SIRS onset 48 hours into the future from when the measurement was taken. If one has the measurement corresponding to lab 50019 and that corresponding to io 102 (Set 1 from Table 7), then one can make a prediction using those two patient measurements, normalizing them, and applying the coefficients and bias from the table to produce a probability of SIRS onset 48 hours into the future from when the measurements were taken. From the results in Table 7, this two-feature model is expected to be more accurate than the one-feature model using only feature 50019 (Accuracy of 71.46% rather than 66.86%). If the model predicts such a probability of the onset of SIRS, the hospital can advantageously begin treating the patient for SIRS or sepsis before the onset of any symptoms, saving time and money as compared to waiting for the more dire situation where SIRS or sepsis symptoms have already occurred.
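Purely by way of illustration, the two-feature case just described might be computed as follows; every numerical value in this sketch is a placeholder, since the actual normalization statistics, coefficients, and bias come from Table 1 and Table 7, which are not reproduced in this text.

```python
import math

# Hypothetical two-feature example (lab 50019 plus io 102, per Set 1 of Table 7).
# Every numerical value below is a placeholder, not a value from Table 1 or Table 7.
raw = [7.5, 120.0]                       # the two raw patient measurements
means, stds = [7.0, 100.0], [1.0, 40.0]  # normalization statistics (placeholders)
w, b = [0.6, -0.3], -0.1                 # model coefficients and bias (placeholders)

z = b + sum(wj * (x - mu) / sigma for wj, x, mu, sigma in zip(w, raw, means, stds))
p = 1.0 / (1.0 + math.exp(-z))
print("Probability of SIRS onset 48 hours ahead: %.3f" % p)
print("SIRS predicted" if p > 0.5 else "SIRS not predicted")
```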
Alternatively, as features of the patient are ascertained during his or her stay at the hospital, new models can be created based on those features as described above (using the MIMIC II database) and tested for predictive accuracy in terms of the probability of SIRS onset in the patient. That is, if a patient’s measurements correspond to a combination of features for which a model has not previously been trained, one can use the methods described here to train such a model using historical (past) data with those features only. One can test those models on historical (past) testing set data as described here. One can assess the accuracy and other metrics quantifying the performance of the model on patients in the testing set as described here. Finally, one can then apply the model to the new patient or to new patients as described here. In this case, as in the others described here, treatment of the patient or patients for SIRS or sepsis can be advantageously initiated before the onset of SIRS or sepsis if the model predicts that it is probable the patient will have SIRS 6, 12, 24, or 48 hours in the future. Alternatively, a hospital could base the decision whether to begin treatment for SIRS or sepsis in an asymptomatic patient on the relative Predictive Results of the model (e.g., such treatment would begin in an asymptomatic patient that the model of the invention predicts is probable to develop SIRS at a given time if the Predictive Results show an Accuracy of greater than 60%, or greater than 70%, or greater than 80%, etc.). For example, using a model with accuracy of 60-70%, a given hospital may choose to initiate treatment only if the model predicts a 90% or greater probability of developing SIRS, but using a model with accuracy of 70-80%, the same hospital may choose to initiate treatment if the model predicts an 80% or greater probability of developing SIRS, and using a model with accuracy of greater than 80%, the same hospital may choose to initiate treatment if the model predicts a 70% or greater probability of developing SIRS.
On the other hand, a patient could walk in the door of a hospital that measures features in a manner that is different from that of the MIMIC II database (or some features are the same and one or more features are different in terms of units or a different measurement that is used to assess the same aspect of a patient or a different dose of the same or different medication is used to treat the same aspect of a patient, etc.). First, the features that are different than the MIMIC II features can be mapped to the MIMIC II features by recognizing the similarity of what the measurement achieves (for example, different ways of measuring blood urea [group 2], glucose levels [group 3], cholesterol [group 16], and blood coagulability [chart 815 in group 18]). Then the above models or new models can be used in accordance with the invention to assess the probability of SIRS onset at a given time in the future, with advantageous early treatment being applied as set forth in the above paragraph. For example, simply developing new normalization parameters for new measurements using the method for how normalization was carried out here would allow new measurements to be incorporated into the models presented here. Alternatively, if there is an existing database for the particular hospital that uses features other than MIMIC II features (or a mixture of MIMIC II features and other features), new models can be prepared in accordance with the methods of the invention to select primary, secondary, and additional features from that database that can be used to predict the probability of SIRS onset in a patient in accordance with the methods of the invention described herein. As described here, features would be eliminated and selected, data normalized, and models built and tested using the methods disclosed in this application. The patient’s data then can be obtained for these various primary, secondary, and additional features over the course of time and in the ordinary course of the patient’s stay in the hospital. These new models prepared using the hospital’s database can be applied to the patient’s features to determine the probability of the patient developing SIRS at 6, 12, 24, or 48 hours in the future. Patient measurements can be normalized, inserted into the model, and the model would then make a prediction regarding the probability of the onset of SIRS. Alternatively, as features of the patient are ascertained (measured) during his or her stay at the hospital, new models can be created based on those features in accordance with the methods described above (using the hospital’s database) and tested for predictive accuracy in terms of the probability of SIRS onset in the patient using historical (past) patients at the same or similar hospital or hospital system, as described above. New measurements for the patient can be used in these new models to predict the probability of the onset of SIRS in the new patient. In either case, treatment of the patient for SIRS can be advantageously initiated before the onset of SIRS if the model predicts that it is probable the patient will have SIRS 6, 12, 24, or 48 hours in the future. 
Alternatively, a hospital could base the decision whether to begin treatment for SIRS in an asymptomatic patient on the relative Predictive Results of the model (e.g., such treatment would begin in an asymptomatic patient that the model of the invention predicts is probable to develop SIRS at a given time if the Predictive Results show an Accuracy of greater than 60%, or greater than 70%, or greater than 80%, etc.). For example, using a model with accuracy of 60-70%, a given hospital may choose to initiate treatment only if the model predicts a 90% or greater probability of developing SIRS, but using a model with accuracy of 70-80%, the same hospital may choose to initiate treatment if the model predicts an 80% or greater probability of developing SIRS, and using a model with accuracy of greater than 80%, the same hospital may choose to initiate treatment if the model predicts a 70% or greater probability of developing SIRS.
In another example embodiment of the invention, a hospital, medical center, or health care system maintains multiple models simultaneously. The measurements for a patient can be input into multiple models to obtain multiple probabilities of the onset of SIRS at the same or different times in the future. These different predictive probabilities can be combined to develop an aggregate likelihood or probability of developing SIRS and an action plan can be developed accordingly. For example, the different models could vote as to whether they expected SIRS onset within a given timeframe, and the aggregate prediction could be made based on the outcome of this voting scheme. The voting can be unweighted (each model receives an equal vote), or weighted based on the accuracy or other quantitative metric of the predictive abilities of each model (with more accurate or higher quality models casting a higher proportional vote).
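The following sketch illustrates one such voting scheme; the function name, the per-model probabilities, and the choice of accuracies as weights are illustrative assumptions.

```python
def aggregate_vote(predictions, weights=None, threshold=0.5):
    """Combine SIRS votes from several models, as described above.

    predictions: per-model probabilities of SIRS onset within the window.
    weights: optional per-model weights (e.g., each model's accuracy); equal
    weights give the unweighted vote.
    """
    votes = [1.0 if p > threshold else 0.0 for p in predictions]
    if weights is None:
        weights = [1.0] * len(votes)
    weighted = sum(w * v for w, v in zip(weights, votes)) / sum(weights)
    return weighted > 0.5   # aggregate prediction: majority of (weighted) votes

# Example: three models voting, weighted by their (placeholder) accuracies.
print(aggregate_vote([0.62, 0.41, 0.77], weights=[0.83, 0.68, 0.79]))  # True
```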
In yet another example embodiment of the invention, one can use multiple models and base a prediction on the first one for which a sufficient number of measurements have been obtained for the current patient. In another aspect of the invention, in any of the embodiments described, the parameters for a model can be re-computed (updated) using additional data from the greater number of historical patients available as time progresses. For example, every year, every month, every week, or every day, an updated database of historical (past) patients can be used to retrain the set of models in active use by creating a training and testing dataset from the available past data, training the models on the training data, and testing them to provide quantitative assessment on the testing data as described here.
An example embodiment of the present invention is directed to one or more processors, which can be implemented using any conventional processing circuit and device or combination thereof, e.g., a Central Processing Unit (CPU) of a Personal Computer (PC) or other workstation processor, to execute code provided, e.g., on a hardware computer-readable medium including any conventional memory device, to perform any of the methods described herein, alone or in combination. For example, in an example embodiment, the circuitry interfaces with a patient population database, obtaining therefrom data, and executes an algorithm by which the circuitry generates prediction models, as described above. In an example embodiment, the circuitry generates the models in the form of further executables processable by the circuitry (or other circuitry) to predict onset of a disease (or diagnose a disease) based on respective datasets of a respective patient. In an alternative example embodiment, the algorithms are programmed in hardwired fashion in the circuitry, e.g., in the form of an application specific integrated circuit (ASIC). The one or more processors can be embodied in a server or user terminal or combination thereof. The user terminal can be embodied, for example, as a desktop, laptop, hand-held device, Personal Digital Assistant (PDA), television set-top Internet appliance, mobile telephone, smart phone, etc., or as a combination of one or more thereof. The memory device can include any conventional permanent and/or temporary memory circuits or combination thereof, a non-exhaustive list of which includes Random Access Memory (RAM), Read Only Memory (ROM), Compact Disks (CD), Digital Versatile Disk (DVD), and magnetic tape.
An example embodiment of the present invention is directed to one or more hardware computer-readable media, e.g., as described above, on which are stored instructions executable by a processor to perform the methods described herein.
An example embodiment of the present invention is directed to the described methods being executed by circuitry, such as that described above.
An example embodiment of the present invention is directed to a method, e.g., of a hardware component or machine, of transmitting instructions executable by a processor to perform the methods described herein.
The above description is intended to be illustrative, and not restrictive. Those skilled in the art can appreciate from the foregoing description that the present invention may be implemented in a variety of forms, and that the various embodiments may be implemented alone or in combination. Therefore, while the embodiments of the present invention have been described in connection with particular examples thereof, the true scope of the embodiments and/or methods of the present invention should not be so limited since other modifications will become apparent to the skilled practitioner upon a study of the specification and following claims.
1. U.S. Pat. No. 7,645,573 (“the ‘573 patent”).
2. U.S. Pat. No. 8,029,982 (“the ‘982 patent”).
3. U.S. Pat. No. 8,527,449 (“the ‘449 patent”).
4. U.S. Pat. No. 8,697,370 (“the ‘370 patent”).
5. U.S. Pat. App. Pub. No. 2010/0190652 (“the ‘652 publication”).
6. U.S. Pat. App. Pub. No. 2013/0004968 (“the ‘968 publication”).
7. U.S. Pat. App. Pub. No. 2014/0248631 (“the ‘631 publication”).
8. U.S. Pat. App. Pub. No. 2015/0024969 (“the ‘969 publication”).
9. Int. Pat. App. Pub. No. WO 2013119869 (“the ‘869 publication”).
10. Int. Pat. App. Pub. No. WO 2014022530 (“the ‘530 publication”).
1. Angus et al., Epidemiology of severe sepsis in the United States: Analysis of incidence, outcome, and associated costs of care, Crit Care Med. 29:1303-1310 (2001).
2. Angus et al., Severe sepsis and septic shock, N Engl J Med 2013; 369:840-851 (2013).
3. Annane et al., Septic shock, Lancet 365: 63-78 (2005).
4. Balci et al., Procalcitonin levels as an early marker in patients with multiple trauma under intensive care, J Int Med Res. 37:1709-17 (2009).
5. Barriere et al., An overview of mortality risk prediction in sepsis, Crit Care Med. 23:376-93 (1995).
6. Begley et al., Adding Intelligence to Medical Devices Medical Device & Diagnostic Industry Magazine (Mar. 1, 2000).
7. Bennett et al., Artificial intelligence framework for simulating clinical decision-making: A Markov decision process approach, Artificial Intelligence in Medicine, In Press (2013).
8. Bernstein, Transthyretin as a marker to predict outcome in critically ill patients, Clinical Biochemistry 41:1126-1130 (2008).
9. Bhattacharyya, On a measure of divergence between two statistical populations defined by their probability distributions, Bulletin of the Calcutta Mathematical Society, 35:99-109 (1943).
10. Bone et al., Definitions for sepsis and organ failure and guidelines for the use of innovative therapies in sepsis, Chest, 101:1644-1655 (1992).
11. Bracho-Riquelme et al., Leptin in sepsis: a well-suited biomarker in critically ill patients?, Crit Care. 14(2): 138 (2010).
12. Brandt et al., Identifying severe sepsis via electronic surveillance, Am J Med Qual. 30:559-65 (2015).
13. Brause et al., Septic shock diagnosis by neural networks and rule based systems, In: L.C. Jain (ed.), Computational Intelligence Techniques in Medical Diagnosis and Prognosis, Springer Verlag, New York, pp. 323-356 (2001).
14. Carrigan et al., Toward resolving the challenges of sepsis diagnosis, Clinical Chemistry 50:8, 1301-1314 (2004).
15. Comstedt et al., The Systemic inflammatory response syndrome (SIRS) in acutely hospitalised medical patients: a cohort study, Scand J Trauma Resusc Emerg Med. 17:67 (2009).
16. Dellinger et al., Surviving Sepsis Campaign: international guidelines for management of severe sepsis and septic shock: 2008, Crit Care Med. 36: 296-327 (2008).
17. Dellinger et al., Surviving Sepsis Campaign: international guidelines for management of severe sepsis and septic shock: 2012, Crit Care Med. 41:580-637 (2013).
18. Fialho et al., Predicting outcomes of septic shock patients using feature selection based on soft computing techniques, AMIA Annu Symp Proc. 653-662 (2012).
19. Giannoudis et al., Correlation between IL-6 levels and the systemic inflammatory response score: can an IL-6 cutoff predict a SIRS state?, J Trauma. 65:646-52 (2008).
20. Gultepe et al., From vital signs to clinical outcomes for patients with sepsis: a machine learning basis for a clinical decision support system, J Am Med Inform Assoc., 21:315-325 (2014).
21. Hanson et al., Artificial intelligence applications in the intensive care unit, Crit Care Med 29:427-434 (2001).
22. Hoeboer et al., Old and new biomarkers for predicting high and low risk microbial infection in critically ill patients with new onset fever: A case for procalcitonin, J Infect. 64:484-93 (2012).
23. Hohn et al., Procalcitonin-guided algorithm to reduce length of antibiotic therapy in patients with severe sepsis and septic shock, BMC Infect Dis. 13:158 (2013).
24. Hollenberg et al., Practice parameters for hemodynamic support of sepsis in adult patients: 2004 update, Crit Care Med. 32:1928-48 (2004).
25. Jekarl et al., Procalcitonin as a diagnostic marker and IL-6 as a prognostic marker for sepsis, Diagn Microbiol Infect Dis. 75:342-7 (2013).
26. Lai et al., Diagnostic value of procalcitonin for bacterial infection in elderly patients in the emergency department, J Am Geriatr Soc. 58:518-22 (2010).
27. Mani et al., Medical decision support using machine learning for early detection of late-onset neonatal sepsis, J Am Med Inform Assoc. 21: 326-336 (2014).
28. Marques et al., Preprocessing of clinical databases to improve classification accuracy of patient diagnosis, Preprints of the 18th IFAC World Congress, Milano, Italy (Aug. 28-Sep. 2, 2011).
29. Nachimuthu et al., Early Detection of Sepsis in the Emergency Department using Dynamic Bayesian Networks, AMIA Annu Symp Proc. 653-662 (2012).
30. Nierhaus et al., Revisiting the white blood cell count: immature granulocytes count as a diagnostic marker to discriminate between SIRS and sepsis--a prospective, observational study, BMC Immunol. 14:8 (2013).
31. Pittet et al., Systemic inflammatory response syndrome, sepsis, severe sepsis and septic shock: incidence, morbidities and outcomes in surgical ICU patients, Int Care Med. 21:302-309 (1995).
32. Pomi et al., Context-sensitive autoassociative memories as expert systems in medical diagnosis, BMC Medical Informatics and Decision Making, 6:39 (2006).
33. Rangel-Fausto et al., The natural history of the systemic inflammatory response syndrome (SIRS): a prospective study, JAMA 273:117-123 (1995).
34. Saeed et al., Multiparameter Intelligent Monitoring in Intensive Care II: A public-access intensive care unit database, Crit. Care Med., 39:952-960 (2011).
35. Selberg et al., Discrimination of sepsis and systemic inflammatory response syndrome by determination of circulating plasma concentration of procalcitonin, protein complement 3a and interleukin-6, Crit Care Med. 28:2793-2798 (2000).
36. Shapiro et al., The association of sepsis syndrome and organ dysfunction with mortality in emergency department patients with suspected infection, Ann Emerg Med. 48:583-590 (2006).
37. Sinning et al., Systemic inflammatory response syndrome predicts increased mortality in patients after transcatheter aortic valve implantation, Eur Heart J. 33:1459-68 (2012).
38. Tsoukalas et al., From data to optimal decision making: A data-driven, probabilistic machine learning approach to decision support for patients with sepsis, JMIR Med Inform 3(1):e11 (2015).
39. Yu et al., Dual coordinate descent methods for logistic regression and maximum entropy models, Machine Learning 85:41-75 (2011).
Filing Document | Filing Date | Country | Kind
---|---|---|---
PCT/US2017/023885 | 3/23/2017 | WO |
Number | Date | Country
---|---|---
62312339 | Mar 2016 | US