The present disclosure relates generally to systems and processes for assessing and differentiating asthma and chronic obstructive pulmonary disease (COPD) in a patient, and more specifically to computer-based systems and processes for providing a predicted diagnosis of asthma and/or COPD.
Asthma and chronic obstructive pulmonary disease (COPD) are both common obstructive lung diseases affecting millions of individuals around the world. Asthma is a chronic inflammatory disease of hyper-reactive airways, in which episodes are often associated with specific triggers, such as allergens. In contrast, COPD is a progressive disease characterized by persistent airflow limitation due to chronic inflammatory response of the lungs to noxious particles or gases, commonly caused by cigarette smoking.
Despite sharing some key symptoms, such as shortness of breath and wheezing, asthma and COPD are quite different in terms of how they are treated and managed. Drugs for treating asthma and COPD can come from the same class and many of them can be used for both diseases. However, the pathways of treatment and combinations of drugs often differ, especially in different stages of the diseases. Further, while individuals with asthma and COPD are encouraged to avoid their personal triggers, such as pets, tree pollen, and cigarette smoking, some individuals with COPD may also be prescribed oxygen or undergo pulmonary rehabilitation, a program that focuses on learning new breathing strategies, different ways to do daily tasks, and personal exercise training. As such, accurate differentiation of asthma from COPD directly contributes to the proper treatment of individuals with either disease and thus the reduction of exacerbations and hospitalizations.
In order to differentiate between asthma and COPD in patients, physicians typically gather information regarding the patient's symptoms, medical history, and environment. After gathering patient information and data using available processes and tools, the differential diagnosis between asthma and COPD ultimately falls to the physician and thus can be affected by the physician's experience or knowledge. Further, in cases where an individual has long-term asthma or when the onset of asthma occurs later in an individual's life, differentiation between asthma and COPD becomes much more difficult—even with available information and data—due to the similarity of asthma and COPD case histories and symptoms. As a result, physicians often misdiagnose asthma and COPD, resulting in improper therapy, increased morbidity, and decreased patient quality of life.
Accordingly, there is a need for a more reliable, accurate, and reproducible system and process for differentiating asthma from COPD in patients that does not rely primarily on the experience or knowledge available to physicians.
Systems and processes for the diagnostic application of one or more diagnostic models for differentiating asthma from chronic obstructive pulmonary disease (COPD) and providing a predicted diagnosis of asthma and/or COPD are provided. In accordance with one or more examples, a computing device comprises one or more processors, one or more input elements, memory, and one or more programs stored in the memory. The one or more programs include instructions for receiving, via the one or more input elements, a set of patient data corresponding to a first patient, the set of patient data including at least one physiological input based on results of at least one physiological test administered to the first patient. The one or more programs further include instructions for determining, based on the set of patient data, whether a set of one or more data-correlation criteria are satisfied, wherein the set of one or more data-correlation criteria are based on an application of an unsupervised machine learning algorithm to a first historical set of patient data that includes data from a first plurality of patients having one or more phenotypic differences, the phenotypic differences including at least data regarding one or more respiratory conditions. 
The one or more programs further include instructions for determining, in accordance with a determination that the set of one or more data-correlation criteria are satisfied, a first indication of whether the first patient has one or more respiratory conditions selected from a group consisting of asthma and chronic obstructive pulmonary disease (COPD) based on an application of a first diagnostic model to the set of patient data, wherein the first diagnostic model is based on an application of a first supervised machine learning algorithm to a second historical set of patient data that includes data from a second plurality of patients having one or more phenotypic differences, the phenotypic differences including at least data regarding one or more respiratory conditions. The one or more programs further include instructions for outputting the first indication.
The one or more programs further include instructions for, in accordance with a determination that the set of one or more data-correlation criteria are not satisfied, determining a second indication of whether the first patient has one or more respiratory conditions selected from a group consisting of asthma and chronic obstructive pulmonary disease (COPD) based on an application of a second diagnostic model to the set of patient data, wherein the second diagnostic model is based on an application of a second supervised machine learning algorithm to a third historical set of patient data that includes data from a third plurality of patients having one or more phenotypic differences, the phenotypic differences including at least data regarding one or more respiratory conditions, and wherein the third historical set of patient data is different from the second historical set of patient data. The one or more programs further include instructions for outputting the second indication.
The executable instructions for performing the above functions are, optionally, included in a non-transitory computer-readable storage medium or other computer program product configured for execution by one or more processors.
The following description sets forth exemplary systems, devices, methods, parameters, and the like. It should be recognized, however, that such description is not intended as a limitation on the scope of the present disclosure but is instead provided as a description of exemplary embodiments. For example, reference is made to the accompanying drawings in which it is shown, by way of illustration, specific example embodiments. It is to be understood that changes can be made to such example embodiments without departing from the scope of the present disclosure.
Attention is now directed to examples of electronic devices and systems for performing the techniques described herein in accordance with some embodiments.
Client system 102 is connected to a network 106 via connection 104. Connection 104 can be used to transmit and/or receive data from one or more other electronic devices or systems (e.g., 112, 126). The network 106 may include any type of network that allows sending and receiving communication signals, such as a wireless telecommunication network, a cellular telephone network, a time division multiple access (TDMA) network, a code division multiple access (CDMA) network, a Global System for Mobile Communications (GSM) network, a third-generation (3G) network, a fourth-generation (4G) network, a satellite communications network, and other communication networks. The network 106 may include one or more of a Wide Area Network (WAN) (e.g., the Internet), a Local Area Network (LAN), and a Personal Area Network (PAN). In some examples, the network 106 includes a combination of data networks, telecommunication networks, and a combination of data and telecommunication networks. The systems and resources 102, 112, and/or 126 communicate with each other by sending and receiving signals (wired or wireless) via the network 106. In some examples, the network 106 provides access to cloud computing resources (e.g., system 112), which may be elastic/on-demand computing and/or storage resources available over the network 106. The term ‘cloud service’ generally refers to a service performed not locally on a user's device, but rather delivered from one or more remote devices accessible via one or more networks.
Cloud computing system 112 is connected to network 106 via connection 108. Connection 108 can be used to transmit and/or receive data from one or more other electronic devices or systems and can be any suitable type of data connection (e.g., wired, wireless, or any combination of wired and wireless). In some examples, cloud computing system 112 is a distributed system (e.g., remote environment) having scalable/elastic computing resources. In some examples, computing resources include one or more computing resources 114 (e.g., data processing hardware). In some examples, such resources include one or more storage resources 116 (e.g., memory hardware). The cloud computing system 112 can perform processing (e.g., applying one or more machine learning models, applying one or more algorithms) of patient data (e.g., received from client system 102). In some examples, cloud computing system 112 hosts a service (e.g., computer program or application comprising instructions executable by one or more processors) for receiving and processing patient data (e.g., from one or more remote client systems, such as 102). In this way, cloud computing system 112 can provide patient data analysis services to a plurality of health care providers (e.g., via network 106). The service can provide a client system 102 with, or otherwise make available, a client application (e.g., a mobile application, a web-site application, or a downloadable program that includes a set of instructions) executable on client system 102. In some examples, a client system (e.g., 102) communicates with a server-side application (e.g., the service) on a cloud computing system (e.g., 112) using an application programming interface.
In some examples, cloud computing system 112 includes a database 120. In some examples, database 120 is external to (e.g., remote from) cloud computing system 112. In some examples, database 120 is used for storing one or more of patient data, algorithms, machine learning models, or any other information used by cloud computing system 112.
In some examples, system 100 includes cloud computing resource 126. In some examples, cloud computing resource 126 provides external data processing and/or data storage service to cloud computing system 112. For example, cloud computing resource 126 can perform resource-intensive processing tasks, such as machine learning model training, as directed by the cloud computing system 112. In some examples, cloud computing resource 126 is connected to network 106 via connection 124. Connection 124 can be used to transmit and/or receive data from one or more other electronic devices or systems and can be any suitable type of data connection (e.g., wired, wireless, or any combination of wired and wireless). For example, cloud computing system 112 and cloud computing resource 126 can communicate via network 106, and connections 108 and 124. In some examples, cloud computing resource 126 is connected to cloud computing system 112 via connection 122. Connection 122 can be used to transmit and/or receive data from one or more other electronic devices or systems and can be any suitable type of data connection (e.g., wired, wireless, or any combination of wired and wireless). For example, cloud computing system 112 and cloud computing resource 126 can communicate via connection 122, which is a private connection.
In some examples, cloud computing resource 126 is a distributed system (e.g., remote environment) having scalable/elastic computing resources. In some examples, computing resources include one or more computing resources 128 (e.g., data processing hardware). In some examples, such resources include one or more storage resources 130 (e.g., memory hardware). The cloud computing resource 126 can perform processing (e.g., applying one or more machine learning models, applying one or more algorithms) of patient data (e.g., received from client system 102 or cloud computing system 112). In some examples, cloud computing system (e.g., 112) communicates with a cloud computing resource (e.g., 126) using an application programming interface.
In some examples, cloud computing resource 126 includes a database 134. In some examples, database 134 is external to (e.g., remote from) cloud computing resource 126. In some examples, database 134 is used for storing one or more of patient data, algorithms, machine learning models, or any other information used by cloud computing resource 126.
In some embodiments, machine learning system 200 includes a data retrieval module 210. Data retrieval module 210 can provide functionality related to acquiring and/or receiving input data for processing using machine learning algorithms and/or machine learning models. For example, data retrieval module 210 can interface with a client system (e.g., 102) or server system (e.g., 112) to receive data that will be processed, including establishing communication and managing transfer of data via one or more communication protocols.
In some embodiments, machine learning system 200 includes a data conditioning module 212. Data conditioning module 212 can provide functionality related to preparing input data for processing. For example, data conditioning can include making a plurality of images uniform in size (e.g., cropping, resizing), augmenting data (e.g., taking a single image and creating slightly different variations (e.g., by pixel rescaling, shear, zoom, rotating/flipping), extrapolating, feature engineering), adjusting image properties (e.g., contrast, sharpness), filtering data, or the like.
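The image-conditioning operations described above can be sketched as follows; this is a minimal illustration assuming grayscale images stored as NumPy arrays, with illustrative function names and sizes:

```python
import numpy as np

def condition_images(images, target_h=64, target_w=64):
    """Center-crop each image to a uniform size (one way to make a
    plurality of images uniform in size during data conditioning)."""
    out = []
    for img in images:
        h, w = img.shape[:2]
        top = (h - target_h) // 2
        left = (w - target_w) // 2
        out.append(img[top:top + target_h, left:left + target_w])
    return np.stack(out)

def augment_flip(images):
    """Create horizontally flipped variants of each image,
    one simple form of data augmentation."""
    return np.flip(images, axis=2)

# Example: two differently sized grayscale "images"
batch = [np.random.rand(80, 100), np.random.rand(96, 72)]
uniform = condition_images(batch)    # shape (2, 64, 64)
augmented = augment_flip(uniform)    # flipped copies, same shape
```

Other conditioning steps named above (rescaling, shear, zoom, contrast adjustment, filtering) follow the same pattern of transforming each input before training.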
In some embodiments, machine learning system 200 includes a machine learning training module 214. Machine learning training module 214 can provide functionality related to training one or more machine learning algorithms, in order to create one or more trained machine learning models.
The concept of “machine learning” generally refers to the use of one or more electronic devices to perform one or more tasks without being explicitly programmed to perform such tasks. A machine learning algorithm can be “trained” to perform the one or more tasks (e.g., classify an input image into one or more classes, identify and classify features within an input image, predict a value based on input data) by applying the algorithm to a set of training data, in order to create a “machine learning model” (e.g., which can be applied to non-training data to perform the tasks). A “machine learning model” (also referred to herein as a “machine learning model artifact” or “machine learning artifact”) refers to an artifact that is created by the process of training a machine learning algorithm. The machine learning model can be a mathematical representation (e.g., a mathematical expression) to which an input can be applied to get an output. As referred to herein, “applying” a machine learning model can refer to using the machine learning model to process input data (e.g., performing mathematical computations using the input data) to obtain some output.
Training of a machine learning algorithm can be either “supervised” or “unsupervised”. Generally speaking, a supervised machine learning algorithm builds a machine learning model by processing training data that includes both input data and desired outputs (e.g., for each input data, the correct answer (also referred to as the “target” or “target attribute”) to the processing task that the machine learning model is to perform). Supervised training is useful for developing a model that will be used to make predictions based on input data. An unsupervised machine learning algorithm builds a machine learning model by processing training data that only includes input data (no outputs). Unsupervised training is useful for determining structure within input data.
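The distinction can be illustrated with a short example; this is a minimal sketch assuming the scikit-learn library, with toy data standing in for patient inputs and diagnoses:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))       # input data
X[50:] += 4                         # two well-separated groups
y = np.array([0] * 50 + [1] * 50)   # desired outputs (the "target attribute")

# Supervised: trained on inputs AND targets, yielding a predictive model
clf = LogisticRegression().fit(X, y)
pred = clf.predict([[4.0, 4.0]])    # predicts class 1

# Unsupervised: trained on inputs only, yielding structure (two clusters)
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
```

The supervised model predicts a target for new inputs; the unsupervised model only describes structure (cluster membership) within the inputs it was given.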
A machine learning algorithm can be implemented using a variety of techniques, including the use of one or more of an artificial neural network, a deep neural network, a convolutional neural network, a multilayer perceptron, and the like.
Referring again to
In some examples, machine learning system 200 includes machine learning model output module 220. Machine learning model output module 220 can provide functionality related to outputting a machine learning model, for example, based on the processing of training data. Outputting a machine learning model can include transmitting a machine learning model to one or more remote devices. For example, a machine learning system 200 implemented on electronic devices of cloud computing resource 126 can transmit a machine learning model to cloud computing system 112, for use in processing patient data sent between client system 102 and system 112.
In some examples, memory 306 includes one or more computer-readable mediums that store (e.g., tangibly embodies) one or more computer programs (e.g., including computer executable instructions) and/or data for performing techniques described herein in accordance with some examples. In some examples, the computer-readable medium of memory 306 is a non-transitory computer-readable medium. At least some values based on the results of the techniques described herein can be saved into memory, such as memory 306, for subsequent use. In some examples, a computer program is downloaded into memory 306 as a software application. In some examples, one or more processors 304 include one or more application-specific chipsets for carrying out the above-described techniques.
At block 402, a computing system (e.g., client system 102, cloud computing system 112, and/or cloud computing resource 126) receives a data set (e.g., via data retrieval module 210) including anonymized electronic health records related to asthma and/or COPD from an external source (e.g., database 120 or database 134). In some examples, the external source is a commercially available database. In other examples, the external source is a private Key Opinion Leader (“KOL”) database. The data set includes anonymized electronic health records for a plurality of patients diagnosed with asthma and/or COPD. In some examples, the data set includes anonymized electronic health records for millions of patients diagnosed with asthma and/or COPD. The electronic health records include a plurality of data inputs for each of the plurality of patients. The plurality of data inputs represent patient features, physiological measurements, and other information relevant to diagnosing asthma and/or COPD. The electronic health records further include a diagnosis of asthma and/or COPD for each of the plurality of patients. In some examples, the computing system receives more than one data set including anonymized electronic health records related to asthma and/or COPD from various sources (e.g., receiving a data set from a commercially available database and another data set from a KOL database). In these examples, block 402 further includes the computing system combining the received data sets into a single combined data set.
In some examples, the data set received at block 402 includes more data inputs than those included in exemplary data set 500 for one or more patients of the plurality of patients. Some examples of additional data inputs include (but are not limited to) a patient body mass index (BMI), FEV1/FVC ratio, median FEV1/FVC ratio (e.g., if a patient's FEV1 and FVC have been measured more than once), wheeze status (e.g., coarse, bilateral, slight, prolonged, etc.), wheeze status change (e.g., increased, decreased, etc.), cough type (e.g., regular cough, productive cough, etc.), dyspnea type (e.g., paroxysmal nocturnal dyspnea, trepopnea, platypnea, etc.), dyspnea status change (e.g., improved, worsened, etc.), chronic rhinitis count (e.g., number of positive diagnoses), allergic rhinitis count (e.g., number of positive diagnoses), gastroesophageal reflux disease count (e.g., number of positive diagnoses), location data (e.g., barometric pressure and average allergen count of patient residence), and sleep data (e.g., average hours of sleep per night). Additionally, in some examples, the data set includes image data for one or more patients of the plurality of patients included in the data set (e.g., chest radiographs/x-ray images). In some examples, the data set received at block 402 includes fewer data inputs than those included in exemplary data set 500 for one or more patients of the plurality of patients.
Returning to
In some examples, aligning units of measurement for data input values included in the data set at block 404B includes converting all data input values to corresponding metric values (where applicable). For example, converting data input values to corresponding metric values includes converting all data input values for patient height in the data set to centimeters (cm) and/or converting all data input values for patient weight in the data set to kilograms (kg).
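This unit alignment can be sketched as a simple conversion table; the following is an illustrative Python sketch covering only the conversions discussed here:

```python
def to_metric(value, unit):
    """Convert a height/weight data input value to metric units.
    Illustrative only; covers just the units used in this description."""
    factors = {
        "lb": ("kg", 0.45359237),  # pounds -> kilograms
        "ft": ("cm", 30.48),       # feet -> centimeters
        "kg": ("kg", 1.0),         # already metric: unchanged
        "cm": ("cm", 1.0),
    }
    metric_unit, factor = factors[unit]
    return round(value * factor), metric_unit

to_metric(220, "lb")   # -> (100, "kg")
to_metric(5.8, "ft")   # -> (177, "cm")
```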
In some examples, block 404 does not include one of block 404A and block 404B. For example, block 404 does not include block 404A if there is no repeated, nonsensical, or unnecessary data in the data set received at block 402. In some examples, block 404 does not include block 404B if all of the units of measurement for data input values included in the data set received at block 402 are already aligned (e.g., already in metric units).
The computing system also entirely removed Patient 19 (and all of Patient 19's corresponding data inputs) from exemplary data set 500. In this example, the computing system entirely removed Patient 19 from exemplary data set 500 because the computing system determined that Patient 19 was a duplicate of Patient 2 (e.g., all of the data inputs for Patient 19 and Patient 2 were identical and thus Patient 19 was a repeat of Patient 2). Lastly, the computing system aligned the units for the patient weight data input of Patient 2 as well as the patient height data inputs of Patient 11 and Patient 12. Specifically, the computing system converted the values/units for the patient weight data input of Patient 2 from 220 pounds (lb) to 100 kilograms (kg) and the values/units for the patient height data inputs of Patient 11 and Patient 12 from 5.5 feet (ft) and 5.8 ft to 170 centimeters (cm) and 177 cm, respectively.
Returning to
Feature-engineering the pre-processed data set at block 406 further includes the computing system calculating, at block 406B, chi-square statistics corresponding to one or more categorical data inputs for each of the plurality of patients included in the data set and Analysis of Variance (ANOVA) F-test statistics corresponding to one or more non-categorical data inputs for each of the plurality of patients included in the data set. Categorical data inputs include data inputs having non-numerical data input values. Some examples of non-numerical data input values include (but are not limited to) “tight chest” or “chest pressure” for a patient chest label data input and “intermittent,” “mild,” “occasional,” or “no descriptor” for a patient cough status data input. Non-categorical data inputs include data inputs having numerical data input values.
The computing system utilizes chi-square and ANOVA F-test statistics to measure variance between the values of one or more data inputs included in the data set in relation to asthma or COPD diagnoses included in the data set (e.g., the “target attribute” of the data set). Accordingly, the computing system determines, based on the calculated chi-square and ANOVA F-test statistics, one or more data inputs that are most likely to be independent of class and therefore unhelpful and/or irrelevant for training machine learning algorithms using the data set to predict asthma and/or COPD diagnoses. In other words, the computing system determines one or more data inputs (of the data inputs included in the data set) whose values vary little in relation to the asthma or COPD diagnoses included in the data set (e.g., that yield low chi-square or F-test statistics) when compared with other data inputs included in the data set. In some examples, determining the one or more data inputs that are most likely to be independent of class further includes the computing system performing recursive feature elimination with cross-validation (RFECV) based on the data set (e.g., after calculating the chi-square and ANOVA F-test statistics). In some examples, block 406B further includes the computing system removing the one or more data inputs that the computing system determines are most likely to be independent of class for one or more patients of the plurality of patients included in the data set.
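This statistical feature-selection step can be sketched with scikit-learn, which provides chi-square (chi2), ANOVA F-test (f_classif), and recursive feature elimination with cross-validation (RFECV) utilities; the toy data and variable names below are illustrative:

```python
import numpy as np
from sklearn.feature_selection import chi2, f_classif, RFECV
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
y = rng.integers(0, 2, size=200)    # toy target: 0 = asthma, 1 = COPD

# One numerical input tracking the diagnosis, one independent of it
informative = y + rng.normal(scale=0.3, size=200)
noise = rng.normal(size=200)
X_num = np.column_stack([informative, noise])

# ANOVA F-test for non-categorical inputs: a low F statistic suggests
# the input varies little between classes (likely independent of class)
F, _ = f_classif(X_num, y)

# Chi-square for categorical inputs (encoded as non-negative values)
X_cat = np.column_stack([y, rng.integers(0, 2, size=200)])
chi, _ = chi2(X_cat, y)

# RFECV then prunes inputs based on cross-validated model performance
selector = RFECV(LogisticRegression(), cv=3).fit(X_num, y)
```

In this sketch the class-independent noise column receives the lower F statistic and is a candidate for removal.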
Feature-engineering the pre-processed data set at block 406 further includes the computing system one-hot encoding categorical data inputs for each of the plurality of patients included in the data set at block 406C. As described above, categorical data inputs include data inputs having non-numerical data input values. With respect to block 406C, categorical data inputs further include diagnoses of asthma or COPD included in the data set (as a diagnosis of asthma or COPD is a non-numerical value). One-hot encoding is a process by which categorical data input values are converted into a form that can be used to train machine learning algorithms and in some cases improve the predictive ability of the resulting trained machine learning model. Accordingly, one-hot encoding categorical data input values for each of the plurality of patients included in the data set includes converting each of the plurality of patients' non-numerical data input values and diagnosis of asthma or COPD into numerical values and/or binary values representing the non-numerical data input values and asthma or COPD diagnosis. For example, the non-numerical data input values “tight chest” and “chest pressure” for the patient chest label data input are converted to binary values 0 and 1, respectively. Similarly, an asthma diagnosis and a COPD diagnosis are converted to binary values 0 and 1, respectively.
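One-hot encoding as described can be sketched with the pandas get_dummies function (one common implementation; the column values here are illustrative):

```python
import pandas as pd

# Toy categorical data inputs, including the diagnosis target attribute
df = pd.DataFrame({
    "chest_label": ["tight chest", "chest pressure", "tight chest"],
    "diagnosis": ["asthma", "COPD", "COPD"],
})

# One-hot encoding: each category becomes its own 0/1 indicator column
encoded = pd.get_dummies(df)

# For a two-category input, a single binary indicator column suffices
binary = pd.get_dummies(df, drop_first=True)
```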
As shown in
Lastly, as shown in
Returning to
In some examples, after generating a reduced-dimension representation of the data input values for each of the plurality of patients included in the data set (e.g., in the form of one or more coordinates), the computing system adds the reduced-dimension representation of the data input values to the data set as one or more new data inputs for each of the patients. For example, continuing the example above in which the computing system generates a two-dimensional representation of the data input values for each patient included in the data set in the form of two-dimensional coordinates, the computing system subsequently adds a new data input for each coordinate of the two-dimensional coordinates for each patient of the plurality of patients.
Further, after applying the UMAP algorithm to the data set, the computing system generates a UMAP model (e.g., a machine learning model artifact) representing the non-linear reduction of the feature-engineered data set's number of dimensions (e.g., via machine learning model output module 220). Then, as will be described in greater detail below, if the computing system applies the generated UMAP model to, for example, a set of patient data including a plurality of data inputs corresponding to a patient not included in the feature-engineered data set, the computing system determines (based on the application of the UMAP model) a reduced-dimension representation of the data input values for the patient not included in the data set. Specifically, the computing system determines the reduced-dimension representation of the data input values for the patient not included in the feature-engineered data set by non-linearly reducing the set of patient data in the same manner that the computing system reduced the feature-engineered data set's number of dimensions.
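The reduce-then-reapply pattern described above (fit a non-linear reducer on the feature-engineered data set, then transform a new patient's data in the same manner) can be sketched as follows. The description uses a UMAP algorithm, available in the third-party umap-learn package with the same fit/transform interface; for a self-contained sketch, scikit-learn's Isomap, another non-linear dimensionality reducer, stands in here:

```python
import numpy as np
from sklearn.manifold import Isomap

rng = np.random.default_rng(2)
X_train = rng.normal(size=(60, 10))   # feature-engineered set: 10 inputs/patient

# Fitting produces the model artifact representing the non-linear reduction
reducer = Isomap(n_components=2, n_neighbors=10).fit(X_train)
coords_train = reducer.transform(X_train)     # (60, 2) coordinates

# A patient NOT in the data set is reduced in the same manner
new_patient = rng.normal(size=(1, 10))
coords_new = reducer.transform(new_patient)   # (1, 2) coordinates
```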
After generating a reduced-dimension representation of the data input values for each of the plurality of patients included in the feature-engineered data set (e.g., in the form of one or more coordinates), the computing system applies a Hierarchical Density-Based Spatial Clustering of Applications with Noise (HDBSCAN) unsupervised machine learning algorithm to the reduced-dimension representations of the data input values. Applying an HDBSCAN algorithm to the reduced-dimension representation of the data set clusters one or more patients of the plurality of patients included in the data set into one or more clusters (such as groups) of patients based on the reduced-dimension representation of the one or more patients' data input values and one or more threshold similarity/correlation requirements (discussed in greater detail below). Each generated cluster of patients of the one or more generated clusters of patients includes two or more patients having similar/correlated reduced-dimension representations of their data input values (e.g., similar/correlated coordinates). The one or more patients that are clustered into one cluster of patients are referred to as “inliers” and/or “phenotypic hits.” In some examples, the computing system applies one or more other algorithms to the data set to cluster one or more patients of the plurality of patients included in the data set into one or more clusters of patients instead of applying the HDBSCAN algorithm mentioned above. Some examples of such algorithms include (but are not limited to) a K-Means clustering algorithm, a Mean-Shift clustering algorithm, and a Density-Based Spatial Clustering of Applications with Noise (DBSCAN) algorithm.
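The clustering step can be sketched with scikit-learn's DBSCAN (named above as one alternative to HDBSCAN), which likewise labels non-clustered points with -1; toy two-dimensional coordinates stand in for the reduced-dimension representations:

```python
import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(3)
# Two tight groups of patients plus one stray patient
cluster_a = rng.normal(loc=(0, 0), scale=0.1, size=(20, 2))
cluster_b = rng.normal(loc=(5, 5), scale=0.1, size=(20, 2))
stray = np.array([[2.5, 2.5]])
coords = np.vstack([cluster_a, cluster_b, stray])

labels = DBSCAN(eps=0.5, min_samples=5).fit_predict(coords)
# Labels >= 0 mark clustered patients ("inliers"/"phenotypic hits");
# label -1 marks non-clustered patients ("outliers"/"phenotypic misses")
```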
Note, in some examples, one or more patients of the plurality of patients included in the data set will not be clustered into a cluster of patients. The one or more patients that are not clustered into a cluster of patients are referred to as “outliers” and/or “phenotypic misses.” For example, the computing system will not cluster a patient into a cluster of patients if the computing system determines (based on the application of the HDBSCAN algorithm to the reduced-dimension representation of the data set) that the reduced-dimension representation of the patient's data input values does not meet one or more threshold similarity/correlation requirements.
In some examples, the one or more threshold similarity/correlation requirements include a requirement that each coordinate of a reduced-dimension representation of a patient's data input values (e.g., x, y, and z coordinates for a three-dimensional representation) be within a certain numerical range in order to be clustered into a cluster of patients. In some examples, the one or more threshold similarity/correlation requirements include a requirement that at least one coordinate of a reduced-dimension representation of a patient's data input values be within a certain proximity to a corresponding coordinate of reduced-dimension representations of one or more other patients' data input values. In some examples, the one or more threshold similarity/correlation requirements include a requirement that all coordinates of a reduced-dimension representation of a patient's data input values be within a certain proximity to corresponding coordinates for reduced-dimension representations of a minimum number of other patients included in the data set. In some examples, the one or more threshold similarity/correlation requirements include a requirement that all coordinates of a reduced-dimension representation of a patient's data input values be within a certain proximity to a cluster centroid (e.g., a center point of a cluster). In these examples, the computing system determines a cluster centroid for each of the one or more clusters that the computing system generates based on the application of the HDBSCAN algorithm to the data set.
In some examples, the one or more threshold similarity/correlation requirements are predetermined. In some examples, the computing system generates the one or more threshold similarity/correlation requirements based on the application of the HDBSCAN algorithm to the reduced-dimension representation of the data set or the data set itself.
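The cluster-centroid proximity requirement described above can be illustrated with a minimal sketch (not the claimed implementation). The cluster coordinates, centroid, and distance threshold below are hypothetical values standing in for reduced-dimension representations produced by a UMAP/HDBSCAN pipeline:

```python
import math

def cluster_centroid(points):
    """Compute the centroid (mean point) of a cluster of coordinates."""
    dims = len(points[0])
    return tuple(sum(p[d] for p in points) / len(points) for d in range(dims))

def is_inlier(point, centroid, max_distance):
    """A patient satisfies the centroid-proximity requirement when the
    patient's reduced-dimension coordinates lie within max_distance of
    the cluster centroid (measured here as Euclidean distance)."""
    return math.dist(point, centroid) <= max_distance

# Hypothetical 2-D reduced-dimension representations for clustered patients.
cluster = [(1.0, 2.0), (1.2, 1.8), (0.8, 2.2)]
c = cluster_centroid(cluster)          # approximately (1.0, 2.0)
print(is_inlier((1.1, 2.1), c, 0.5))   # close to centroid -> True
print(is_inlier((5.0, 5.0), c, 0.5))   # far from centroid -> False
```

The same distance test generalizes to three or more dimensions, since `math.dist` accepts coordinates of any matching length.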
After applying the HDBSCAN algorithm to the reduced-dimension representations of the data input values for each of the plurality of patients included in the data set, the computing system generates (e.g., via machine learning model output module 220) an HDBSCAN model representing a cluster structure of the data set (e.g., a machine learning model artifact representing the one or more generated clusters and relative positions of inliers and outliers included in the data set). Then, as will be described in greater detail below, if the computing system applies the generated HDBSCAN model to, for example, a reduced-dimension representation of data input values included in a set of patient data for a patient not included in the data set, the computing system determines (based on the application of the HDBSCAN model) whether the patient falls within one of the one or more generated clusters corresponding to the plurality of patients included in the data set. In other words, the computing device determines, based on the application of the HDBSCAN model to the reduced-dimension representation of data input values for the patient, whether the patient is an inlier/phenotypic hit or an outlier/phenotypic miss with respect to the one or more generated clusters corresponding to the plurality of patients included in the data set.
In some examples, at step 408, the computing system applies one or more Gaussian mixture model algorithms to the feature-engineered data set instead of the UMAP and HDBSCAN algorithms. A Gaussian mixture model algorithm, like the UMAP and HDBSCAN algorithms, is an unsupervised machine learning algorithm. Further, similar to applying UMAP and HDBSCAN algorithms to the feature-engineered data set, applying one or more Gaussian mixture model algorithms to the data set allows the computing system to classify patients included in the data set as inliers or outliers. Specifically, the computing system determines a covering manifold (e.g., a surface manifold) for the data set based on the application of the one or more Gaussian mixture model algorithms to the data set. Then, the computing system determines whether a patient is an inlier or an outlier based on whether the patient falls within the covering manifold (e.g., a patient is an inlier if the patient falls within the covering manifold). However, the Gaussian mixture model algorithms provide an additional benefit in that their rejection probability is tunable, which in turn allows the computing system to adjust the probability that a patient included in the data set will fall within the covering manifold and thus the probability that a patient will be classified as an outlier.
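The tunable rejection probability of a Gaussian mixture model can be sketched in one dimension: a patient falls within the covering manifold when the mixture density at the patient's value meets a cutoff, and raising the cutoff raises the rejection probability. The component weights, means, and standard deviations below are hypothetical, not fitted values:

```python
import math

def gaussian_pdf(x, mean, std):
    """Probability density of a single Gaussian component."""
    z = (x - mean) / std
    return math.exp(-0.5 * z * z) / (std * math.sqrt(2 * math.pi))

def mixture_density(x, components):
    """Density of a Gaussian mixture: weighted sum of component densities."""
    return sum(w * gaussian_pdf(x, m, s) for w, m, s in components)

def is_inlier(x, components, density_cutoff):
    """A patient falls within the covering manifold when the mixture
    density at the patient's value meets the cutoff; raising the cutoff
    raises the rejection probability (more patients become outliers)."""
    return mixture_density(x, components) >= density_cutoff

# Hypothetical two-component mixture (weight, mean, std) for one feature.
components = [(0.6, 70.0, 8.0), (0.4, 45.0, 6.0)]

print(is_inlier(68.0, components, 1e-3))   # near a component mean -> True
print(is_inlier(120.0, components, 1e-3))  # far in the tail -> False
```

In practice a fitted multivariate mixture would be used, but the thresholding step, and hence the tunability of the rejection probability, works the same way.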
In some examples, at step 408, the computing system stratifies the feature-engineered data set based on a specific data input included in the data set (e.g., gender, smoking status, FEV1, FEV1/FVC ratio, BMI, number of symptoms, or weight) and then applies a separate Gaussian mixture model algorithm to each stratified subset of the data set. For example, if the computing system stratifies the data set based on gender, the computing system will subsequently apply one Gaussian mixture model algorithm only to male patients included in the data set and apply another Gaussian mixture model algorithm only to female patients included in the data set. In addition to classifying patients included in the stratified subsets as inliers or outliers, stratifying the data set as described above allows the computing system to account for data input values that are dependent upon other data input values included in the feature-engineered data set. For example, because FEV1 and FEV1/FVC ratio values are highly dependent upon gender (e.g., a normal FEV1 measurement for women would be abnormal for men), applying separate Gaussian mixture model algorithms to a subset of female patients and a subset of male patients allows the computing system to account for the FEV1 and FEV1/FVC ratio dependencies when classifying patients as inliers or outliers (e.g., when applying the trained Gaussian mixture model to patient data). This in turn improves the computing system's classification of patients as inliers or outliers (e.g., increased classification accuracy and specificity).
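The stratification step can be sketched as splitting the records on one data input and then fitting something separately to each subset. The records and FEV1 values below are hypothetical, and per-group summary statistics stand in for fitting a separate Gaussian mixture model per stratum:

```python
from collections import defaultdict
from statistics import mean, stdev

def stratify(records, key):
    """Split the data set into subsets keyed by a single data input."""
    groups = defaultdict(list)
    for r in records:
        groups[r[key]].append(r)
    return dict(groups)

# Hypothetical records: FEV1 reference ranges differ by gender, so a
# separate model (here, just per-group statistics) is fit to each stratum.
records = [
    {"gender": "F", "fev1": 2.9}, {"gender": "F", "fev1": 3.1},
    {"gender": "F", "fev1": 3.0}, {"gender": "M", "fev1": 4.0},
    {"gender": "M", "fev1": 4.2}, {"gender": "M", "fev1": 3.8},
]

for gender, subset in sorted(stratify(records, "gender").items()):
    values = [r["fev1"] for r in subset]
    # Fit one "model" per stratum instead of one model for everyone.
    print(gender, round(mean(values), 2), round(stdev(values), 2))
```

Because each stratum gets its own model, a value that is typical within one stratum is never judged against the distribution of another.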
For example,
At block 410, the computing system generates (e.g., via data conditioning module 212) an inlier data set by removing the outliers/phenotypic misses (e.g., the one or more patients of the plurality of patients included in the data set that are not clustered into a cluster of patients) from the data set. Specifically, the computing system entirely removes the outliers/phenotypic misses (and all of their corresponding data inputs) from the data set such that the only patients remaining in the data set are the patients that the computing system clustered into one of the one or more clusters of patients generated at block 408 (e.g., the inliers/phenotypic hits).
For example, as shown in
Returning to
After training the supervised machine learning algorithm, the computing system generates a supervised machine learning model (e.g., a machine learning model artifact). Generating the supervised machine learning model includes the computing system determining, based on the training of the one or more supervised machine learning algorithms, one or more patterns that map the data inputs of the patients included in the inlier data set to the patients' corresponding asthma/COPD diagnoses (e.g., the target attribute). Thereafter, the computing system generates the supervised machine learning model representing the one or more patterns (e.g., a machine learning model artifact representing the one or more patterns). As will be discussed in greater detail below, the computing system uses the generated supervised machine learning model to predict an asthma and/or COPD diagnosis when provided with data similar to the inlier data set (e.g., patient data including a plurality of data inputs).
In the examples where the inlier data set is divided into an inlier training set and an inlier validation set, generating the supervised machine learning model further includes the computing system validating the supervised machine learning model (generated by applying the supervised machine learning algorithm to the inlier training set) using the inlier validation set. Validating a supervised machine learning model assesses the supervised machine learning model's ability to accurately predict a target attribute when provided with data similar to the data used to train the supervised machine learning algorithm that generated the supervised machine learning model. In these examples, the computing system validates the supervised machine learning model to assess the supervised machine learning model's ability to accurately predict an asthma and/or COPD diagnosis when applied to patient data that is similar to the inlier data set used during the training process described above (e.g., patient data including a plurality of data inputs).
There are various types of supervised machine learning model validation methods. Some examples of the types of validation include k-fold cross validation, stratified k-fold cross validation, leave-p-out cross validation, or the like. In some examples, the computing system uses one type of validation to validate the supervised machine learning model (generated by applying the supervised machine learning algorithm to the inlier training set). In other examples, the computing system uses more than one type of validation to validate the supervised machine learning model. Further, in some examples, the number of patients in the inlier training set, the number of patients in the inlier validation set, the number of times the supervised machine learning algorithm is trained, and/or the number of times the supervised machine learning model is validated, are based on the type(s) of validation the computing system uses during the validation process.
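The k-fold splitting idea can be sketched as follows; the number of patients and folds below are illustrative only:

```python
def k_fold_splits(n_samples, k):
    """Yield (train_indices, validation_indices) pairs for k-fold
    cross validation: each fold serves exactly once as the validation set
    while the remaining folds form the training set."""
    indices = list(range(n_samples))
    fold_size = n_samples // k
    for i in range(k):
        start = i * fold_size
        stop = (i + 1) * fold_size if i < k - 1 else n_samples
        validation = indices[start:stop]
        train = indices[:start] + indices[stop:]
        yield train, validation

# 6 inlier patients, 3 folds: each patient is validated exactly once.
for train, validation in k_fold_splits(6, 3):
    print(train, validation)
```

Stratified k-fold additionally balances the class proportions (e.g., asthma vs. COPD diagnoses) within each fold, and leave-p-out exhaustively validates on every subset of p patients.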
Validating the supervised machine learning model includes the computing system removing the asthma/COPD diagnosis for each patient included in the inlier validation set, as that is the target attribute that the supervised machine learning model predicts. After removing the asthma/COPD diagnosis for each patient included in the inlier validation set, the computing system applies the supervised machine learning model to the data input values of the patients included in the inlier validation set, such that the supervised machine learning model determines an asthma and/or COPD diagnosis prediction for each of the patients based on each of the patient's data input values. After, the computing system evaluates the supervised machine learning model's ability to predict an asthma and/or COPD diagnosis, which includes the computing system comparing the patients' determined asthma and/or COPD diagnosis predictions to the patients' true asthma/COPD diagnoses (e.g., the diagnoses that were removed from the inlier validation set). In some examples, the computing system's method for evaluating the supervised machine learning model's ability to predict an asthma and/or COPD diagnosis is based on the type(s) of validation used during the validation process.
In some examples, evaluating the supervised machine learning model's ability to predict an asthma and/or COPD diagnosis includes the computing system determining one or more classification performance metrics representing the predictive ability of the supervised machine learning models. Some examples of the one or more classification performance metrics include an F1 score (also known as an F-score or F-measure), a Receiver Operating Characteristic (ROC) curve, an Area Under Curve (AUC) metric (e.g., a metric based on an area under an ROC curve), a log-loss metric, an accuracy metric, a precision metric, a specificity metric, and a recall metric (also known as a sensitivity metric). In some examples, the computing system iteratively performs the above training and validation processes (e.g., using the inlier training set and inlier validation set, or variations thereof) until the one or more determined classification performance metrics satisfy one or more corresponding predetermined classification performance metric thresholds. In these examples, the supervised machine learning model generated by the computing system is the supervised machine learning model associated with one or more classification performance metrics that each satisfy the one or more corresponding predetermined classification performance metric thresholds.
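Several of the metrics named above can be computed directly from the predicted and true labels. The following sketch uses hypothetical diagnoses for six patients; it is illustrative, not the claimed evaluation procedure:

```python
def classification_metrics(true_labels, predicted_labels, positive):
    """Compute accuracy, precision, recall (sensitivity), and F1 score,
    treating one diagnosis label as the positive class."""
    pairs = list(zip(true_labels, predicted_labels))
    tp = sum(1 for t, p in pairs if t == positive and p == positive)
    fp = sum(1 for t, p in pairs if t != positive and p == positive)
    fn = sum(1 for t, p in pairs if t == positive and p != positive)
    accuracy = sum(1 for t, p in pairs if t == p) / len(pairs)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return {"accuracy": accuracy, "precision": precision, "recall": recall, "f1": f1}

# Hypothetical true diagnoses and model predictions for six patients.
true = ["asthma", "asthma", "copd", "copd", "asthma", "copd"]
pred = ["asthma", "copd", "copd", "copd", "asthma", "asthma"]
metrics = classification_metrics(true, pred, positive="asthma")
print({k: round(v, 3) for k, v in metrics.items()})
```

ROC/AUC metrics additionally require the model's predicted probabilities rather than hard labels, so they are omitted from this sketch.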
In some examples, validating the supervised machine learning model further includes the computing system tuning/optimizing hyperparameters for the supervised machine learning model (e.g., using techniques specific to the specific supervised machine learning algorithm used to generate the supervised machine learning model). Tuning/optimizing a supervised machine learning model's hyperparameters (also referred to as “deep optimization”), as opposed to maintaining a supervised machine learning model's default hyperparameters (also referred to as “basic optimization”), optimizes the supervised machine learning model's performance and thus improves its ability to make accurate predictions (e.g., improves the model's performance metrics, such as the model's accuracy, sensitivity, etc.).
For example, Table (1) below includes asthma and/or COPD prediction results (e.g., percent of true labels/diagnoses correctly predicted) based on the application of the supervised machine learning model to a test set of patient data when the hyperparameters for the supervised machine learning model were not tuned/optimized during the validation of the model (i.e., basic optimization). On the other hand, Table (2) below includes asthma and/or COPD prediction results (e.g., percent of true labels/diagnoses correctly predicted) based on the application of the supervised machine learning model to the same test set of patient data when the hyperparameters for the supervised machine learning model were tuned/optimized during the validation of the model (i.e., deep optimization). As shown, while the basic optimization supervised machine learning model predicted asthma, COPD, and asthma and COPD (“ACO”) with fairly high accuracy and sensitivity, the accuracy and sensitivity of the deep optimization supervised machine learning model was even higher.
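One common way to carry out the deep optimization described above is an exhaustive grid search over candidate hyperparameter values. The hyperparameter names (`max_depth`, `learning_rate`) and the scoring function below are hypothetical stand-ins for a real train-and-validate step:

```python
from itertools import product

def grid_search(train_and_score, grid):
    """Deep optimization sketch: try every hyperparameter combination and
    keep the one with the best validation score."""
    best_score, best_params = float("-inf"), None
    keys = sorted(grid)
    for values in product(*(grid[k] for k in keys)):
        params = dict(zip(keys, values))
        score = train_and_score(params)
        if score > best_score:
            best_score, best_params = score, params
    return best_params, best_score

# Hypothetical scorer: pretend validation accuracy peaks at depth 4, lr 0.1.
def scorer(params):
    return 1.0 - abs(params["max_depth"] - 4) * 0.05 - abs(params["learning_rate"] - 0.1)

grid = {"max_depth": [2, 4, 8], "learning_rate": [0.01, 0.1, 0.5]}
params, score = grid_search(scorer, grid)
print(params)  # {'learning_rate': 0.1, 'max_depth': 4}
```

Basic optimization corresponds to skipping this search and calling `train_and_score` once with the algorithm's default hyperparameters.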
In some examples, after validating the supervised machine learning model (and, in some examples, after determining one or more performance metrics corresponding to the supervised machine learning model), the computing system performs feature selection based on the data inputs included in the inlier data set to narrow down the most important data inputs with respect to predicting asthma and/or COPD (e.g., the data inputs that have the greatest impact on the supervised machine learning model's diagnosis predictions). Specifically, the computing system determines the importance of the data inputs included in the inlier data set using one or more feature selection techniques such as recursive feature elimination, Pearson correlation filtering, chi-squared filtering, Lasso regression, and/or tree-based selection (e.g., Random Forest). For example, after performing feature selection for the basic optimization and deep optimization supervised machine learning models discussed above with reference to Table (1) and Table (2), the computing system determined that the most important data inputs included in the inlier data set used to train the two supervised machine learning models were FEV1/FVC ratio, FEV1, cigarette packs smoked per year, patient age, dyspnea incidence, whether the patient is a current smoker, patient BMI, whether the patient is diagnosed with allergic rhinitis, wheeze incidence, cough incidence, whether the patient is diagnosed with chronic rhinitis, and if the patient has never smoked before. In some examples, after the computing system determines the most important data inputs via feature selection, the computing system retrains and revalidates the supervised machine learning model using a reduced inlier training data set and a reduced inlier validation set that only includes values for the data inputs that were determined to be most important. 
In this manner, the computing system generates a supervised machine learning model that can accurately predict asthma and/or COPD diagnoses based on a reduced number of data inputs. This in turn increases the speed at which the supervised machine learning algorithm can make accurate predictions, as there is less data (i.e., less data input values) that the supervised machine learning algorithm needs to process when determining its diagnosis predictions.
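The feature selection step can be sketched as a backward elimination loop in the spirit of recursive feature elimination: repeatedly drop the data input whose removal costs the least validation score until the desired number remain. The per-feature importances below are hypothetical stand-ins for model-derived scores:

```python
def recursive_feature_elimination(features, score_subset, n_keep):
    """Backward elimination sketch: repeatedly drop the feature whose
    removal costs the least score, until n_keep features remain."""
    selected = list(features)
    while len(selected) > n_keep:
        # The cheapest feature to remove is the one whose removal
        # leaves the highest-scoring subset behind.
        drop = max(selected, key=lambda f: score_subset([g for g in selected if g != f]))
        selected.remove(drop)
    return selected

# Hypothetical importances standing in for a trained model's feature scores.
importance = {"fev1_fvc_ratio": 0.9, "fev1": 0.8, "pack_years": 0.6, "eos_count": 0.1}

def score_subset(subset):
    return sum(importance[f] for f in subset)

print(recursive_feature_elimination(list(importance), score_subset, 2))
```

In a real pipeline `score_subset` would retrain and revalidate the supervised model on each candidate subset, which is exactly why reducing the number of data inputs also speeds up later predictions.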
Generating an inlier data set (e.g., in accordance with the processes of block 408) and subsequently generating a supervised machine learning model based on the application of a supervised machine learning algorithm to the inlier data set provides several advantages over simply generating a supervised machine learning model by applying a supervised machine learning algorithm to a larger data set that includes inliers/phenotypic hits and outliers/phenotypic misses. For example, because the inlier data set only includes patients having similar/correlated data input values, the computing system is able to generate a supervised machine learning model that predicts an asthma and/or COPD diagnosis with very high accuracy when applied to a patient having similar/correlated data input values to those of the inlier patients.
For example,
At block 414, the computing system generates a supervised machine learning model (e.g., via machine learning model output module 220) by applying a supervised machine learning algorithm (e.g., included in machine learning algorithms 216) to the feature-engineered data set generated at block 406 (e.g., via machine learning training module 214). Block 414 is identical to block 412 except that the computing system applies a supervised machine learning algorithm to a different data set at each block. For example, at block 412, the computing system applies a supervised machine learning algorithm to an inlier data set (generated by the application of one or more unsupervised machine learning algorithms to the feature-engineered data set generated at block 406) whereas at block 414, the computing system applies the same supervised machine learning algorithm directly to a feature-engineered data set after the feature-engineered data set is generated at block 406. In some examples, the computing system uses a different supervised machine learning algorithm at block 412 and block 414. For example, the computing system applies a first supervised machine learning algorithm to the inlier data set at block 412 and a second supervised machine learning algorithm to the feature-engineered data set at block 414.
At block 902, a computing system (e.g., client system 102, cloud computing system 112, and/or cloud computing resource 126) receives a first historical set of patient data (e.g., exemplary data set 500) (e.g., as described above with reference to block 402 of
At block 904, the computing system pre-processes the first historical set of patient data received at block 902 (e.g., as described above with reference to block 404 of
At block 908, the computing system applies one or more unsupervised machine learning algorithms to the feature-engineered first historical set of patient data (e.g., as described above with reference to block 408 of
At block 910, the computing system generates a set of one or more data-correlation criteria based on the application of the one or more unsupervised machine learning algorithms (e.g., a UMAP algorithm, HDBSCAN algorithm, and/or Gaussian mixture model algorithm) to the feature-engineered first historical set of patient data. In some examples, at block 910, the computing system generates a set of one or more data-correlation criteria based on the application of the one or more unsupervised machine learning algorithms to one or more stratified subsets of the feature-engineered first historical set of patient data.
In some examples, the set of one or more data-correlation criteria include one or more unsupervised machine learning models (e.g., one or more unsupervised machine learning model artifacts (e.g., a UMAP model, HDBSCAN model, and/or Gaussian mixture model)) generated by the computing system based on the application of the one or more unsupervised machine learning algorithms to the feature-engineered first historical set of patient data or to one or more stratified subsets of the feature-engineered first historical set of patient data (e.g., as described above with reference to block 408 of
At block 912, the computing system generates a second historical set of patient data (e.g., exemplary data set 800). The second historical set of patient data includes data from a second plurality of patients having one or more phenotypic differences regarding patient features and/or one or more respiratory conditions. In some examples, the phenotypic differences include data regarding one or more respiratory conditions. In some examples, the data regarding one or more respiratory conditions includes a true diagnosis of asthma, COPD, both asthma and COPD, or neither asthma nor COPD. In these examples, a true diagnosis is a diagnosis that has been confirmed by one or more physicians and/or research scientists. In some examples, the second historical set of patient data is a sub-set of the first historical set of patient data that includes data from one or more patients of the first plurality of patients included in the first historical set of patient data that satisfy the set of one or more data-correlation criteria generated at block 910.
At block 914, the computing system generates a first diagnostic model by applying one or more supervised machine learning algorithms to the second historical set of patient data generated at block 912 (e.g., as described above with reference to block 412 of
At block 916, the computing system generates a second diagnostic model by applying one or more supervised machine learning algorithms to a third historical set of patient data. The third historical set of patient data includes data from a third plurality of patients having one or more phenotypic differences regarding patient features and/or one or more respiratory conditions. In some examples, the phenotypic differences include data regarding one or more respiratory conditions. In some examples, the data regarding one or more respiratory conditions includes a true diagnosis of asthma, COPD, both asthma and COPD, or neither asthma nor COPD. In these examples, a true diagnosis is a diagnosis that has been confirmed by one or more physicians and/or research scientists. In some examples, the third historical set of patient data and the first historical set of patient data are the same historical set of patient data (e.g., exemplary data set 500). In some examples, the second historical set of patient data generated at block 912 is a sub-set of the third historical set of patient data. In these examples, the second historical set of patient data includes data from one or more patients of the third plurality of patients included in the third historical set of patient data that satisfy the set of one or more data-correlation criteria generated at block 910. As will be discussed in greater detail below, the computing system applies the first diagnostic model generated at block 914 and/or the second diagnostic model generated at block 916 to a patient's data to predict an asthma and/or COPD diagnosis for the patient.
At block 1002, a computing system (e.g., client system 102, cloud computing system 112, and/or cloud computing resource 126) receives, via one or more input elements (e.g., human input device 312 and/or network interface 310), a set of patient data corresponding to a patient. The set of patient data includes a plurality of data inputs representing the patient's features, physiological measurements, and/or other information relevant to diagnosing asthma and/or COPD. In some examples, the data inputs representing the patient's physiological measurements include results of at least one physiological test administered to the patient (e.g., a lung function test, an exhaled nitric oxide test (such as a FeNO test), or the like self-administered by the patient, or administered by a physician, clinician, or other individual). Further, in some examples, the computing system receives (e.g., via network interface 310) one or more of the data inputs representing the patient's physiological measurements from one or more physiological test devices over a network (e.g., network 106). Some examples of such physiological test devices include (but are not limited to) a spirometry device, a FeNO device, and a chest radiography (x-ray) device.
In some examples, the set of patient data received at block 1002 includes more data inputs than those shown in exemplary set of patient data 1102 and exemplary set of patient data 1104 of
Returning to
At block 1006, in accordance with a determination that the set of patient data received at block 1002 does not include sufficient data, the computing system forgoes differentially diagnosing asthma and COPD in the patient.
At block 1008, in accordance with a determination that the set of patient data received at block 1002 does include sufficient data, the computing device pre-processes the set of patient data. As shown in
In some examples, block 1008 does not include one of block 1008A and block 1008B. For example, block 1008 does not include block 1008A if there is no repeated, nonsensical, or unnecessary data in the data set received at block 1002. In some examples, block 1008 does not include block 1008B if all of the units of measurement for data input values included in the set of patient data received at block 1002 are already aligned (e.g., already in metric units).
Further, the computing system removed the patient EOS count data input from exemplary set of patient data 1102 and exemplary set of patient data 1104 because, based on chi-square statistics previously calculated by the computing system, EOS count is likely to be independent of class and therefore unhelpful for differentially diagnosing asthma and COPD. The pre-processing in this example did not include the computing system aligning units of measurement because the units of measurement of exemplary set of patient data 1102 and exemplary set of patient data 1104 were already aligned (e.g., patient height data input values were already in cm, patient weight data input values were already in kg, etc.).
Returning to
Feature-engineering the pre-processed set of patient data at block 1010 further includes the computing system one-hot encoding categorical data inputs (e.g., data inputs having non-numerical values) included in the set of patient data at block 1010B. One-hot encoding categorical data inputs included in the set of patient data includes converting each of the non-numerical data input values in the set of patient data into numerical values and/or binary values representing the non-numerical data input values. For example, converting non-numerical data input values into binary values includes the computing system converting non-numerical data input values “tight chest” and “chest pressure” for the patient chest label data input into binary values 0 and 1, respectively.
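The encoding step above can be sketched as follows; in the strict one-hot form, each category becomes its own binary column (the chest-label values are taken from the example above):

```python
def one_hot_encode(values):
    """Convert a categorical data input into one binary column per
    category; each row has a 1 in exactly one column."""
    categories = sorted(set(values))
    encoded = [[1 if v == c else 0 for c in categories] for v in values]
    return encoded, categories

# Non-numerical values for the patient chest label data input.
chest_labels = ["tight chest", "chest pressure", "tight chest"]
encoded, categories = one_hot_encode(chest_labels)
print(categories)  # ['chest pressure', 'tight chest']
print(encoded)     # [[0, 1], [1, 0], [0, 1]]
```

For a two-category input this is equivalent to the single binary 0/1 value described above; one-hot encoding matters most when an input has three or more categories with no natural ordering.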
As shown in
Returning to
In some examples, after generating a reduced-dimension representation of the patient's data input values (e.g., in the form of one or more coordinates), the computing system adds the reduced-dimension representation to the set of patient data as one or more new data inputs. For example, in the example above wherein the computing system generates a two-dimensional representation of the patient's data input values in the form of two-dimensional coordinates, the computing system subsequently adds a new data input for each coordinate of the two-dimensional coordinates to the set of patient data.
After generating a reduced-dimension representation of the patient's data input values using the UMAP model, the computing system applies an HDBSCAN model to the reduced-dimension representation of the set of patient data (e.g., generated via the application of the UMAP model to the set of patient data). The HDBSCAN model is generated by the computing system's application of an HDBSCAN algorithm to the reduced-dimension representation of the training data set discussed above with respect to the UMAP model (e.g., as described above with reference to block 408 of
In some examples, the patient is not clustered into one of the one or more previously-generated clusters of patients. A patient that is not clustered into a cluster of the one or more previously-generated clusters of patients is referred to as an “outlier” and/or a “phenotypic miss.” For example, the computing system will not cluster a patient into a cluster of the one or more previously-generated clusters of patients if the computing system determines (based on the application of the HDBSCAN model to the reduced-dimension representation of the set of patient data) that the reduced-dimension representation of the patient's data input values do not satisfy one or more threshold similarity/correlation requirements.
In some examples, the one or more threshold similarity/correlation requirements include a requirement that each coordinate of the reduced-dimension representation of the patient's data input values (e.g., x, y, and z coordinates for a three-dimensional representation) be within a certain numerical range in order to be clustered into one of the one or more previously-generated clusters of patients. In these examples, the certain numerical range is based on the reduced-dimension representation coordinates of the patients clustered in the one or more previously-generated clusters. In some examples, the one or more threshold similarity/correlation requirements include a requirement that at least one coordinate of the reduced-dimension representation of the patient's data input values be within a certain proximity to a corresponding coordinate of a reduced-dimension representation of the data input values for one or more patients in at least one of the one or more previously-generated clusters of patients. In some examples, the one or more threshold similarity/correlation requirements include a requirement that all coordinates of a reduced-dimension representation of the patient's data input values be within a certain proximity to corresponding coordinates of reduced-dimension representations of a minimum number of patients in at least one of the one or more previously-generated clusters of patients. In some examples, the one or more threshold similarity/correlation requirements include a requirement that all coordinates of a reduced-dimension representation of a patient's data input values be within a certain proximity to a cluster centroid (e.g., a center point of a cluster). 
In these examples, the computing system determines a cluster centroid for each of the one or more previously-generated clusters that the computing system generates based on the application of the HDBSCAN algorithm to the reduced-dimension representation of the training data set of patients described above.
As shown in
Returning to
In some examples, the computing system's application of a Gaussian mixture model to the feature-engineered set of patient data groups the patient into a covering manifold previously generated by the computing system's application of the Gaussian mixture model algorithm to the training data set of patients (or a stratified subset of the training data set of patients). If the patient is grouped within the previously-generated covering manifold, the patient is referred to as an “inlier” and/or a “phenotypic hit.” In some examples, the patient is not grouped into the previously-generated covering manifold. A patient that is not grouped into the previously-generated covering manifold is referred to as an “outlier” and/or a “phenotypic miss.”
At block 1014, in accordance with a determination that the patient is an inlier/phenotypic hit, the computing system determines a first predicted asthma and/or COPD diagnosis by applying a first supervised machine learning model to the set of patient data. The first supervised machine learning model is a supervised machine learning model generated by the computing system's application of a supervised machine learning algorithm to a training data set of inlier patients (e.g., as described above with reference to block 412 of
At block 1016, the computing system outputs the first predicted asthma and/or COPD diagnosis. For example, the first predicted asthma and/or COPD diagnosis is output by display device 314 of
At block 1018, in accordance with a determination that the patient is an outlier/phenotypic miss, the computing system determines a second predicted asthma and/or COPD diagnosis by applying a second supervised machine learning model to the set of patient data. The second supervised machine learning model is a supervised machine learning model generated by the computing system's application of a supervised machine learning algorithm to a feature-engineered training data set of patients (e.g., as described above with reference to block 414 of
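The routing performed across blocks 1012 through 1018 can be sketched as a simple dispatch: the inlier-trained model handles phenotypic hits, and the model trained on the full feature-engineered data set handles phenotypic misses. The stand-in models and inlier test below are hypothetical placeholders, not the claimed models:

```python
def predict_diagnosis(patient, is_inlier, inlier_model, general_model):
    """Route the patient to the appropriate diagnostic model: the model
    trained only on inliers when the patient is a phenotypic hit, and
    the model trained on the full data set otherwise."""
    model = inlier_model if is_inlier(patient) else general_model
    return model(patient)

# Hypothetical stand-in models and inlier test for illustration only.
def inlier_model(patient):
    return "asthma"

def general_model(patient):
    return "copd"

def in_cluster(patient):
    return patient["fev1"] > 2.0

print(predict_diagnosis({"fev1": 3.0}, in_cluster, inlier_model, general_model))  # asthma
print(predict_diagnosis({"fev1": 1.2}, in_cluster, inlier_model, general_model))  # copd
```

Keeping the routing separate from the two models mirrors the structure of the process: the unsupervised models decide *which* supervised model applies, and the supervised model decides the diagnosis.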
At block 1020, the computing system outputs the second predicted asthma and/or COPD diagnosis. For example, the second predicted asthma and/or COPD diagnosis is output by display device 314 of
In some examples, the computing system determines a confidence score corresponding to a predicted asthma and/or COPD diagnosis. For example, the computing system determines a confidence score based on the application of a first supervised machine learning model to a set of patient data (as described above with reference to block 1014). In some examples, the computing system determines a confidence score based on the application of a second supervised machine learning model to a set of patient data (as described above with reference to block 1018). In some examples, the computing system outputs a confidence score with a predicted asthma and/or COPD diagnosis. For example, the computing system outputs a confidence score corresponding to the first predicted asthma and/or COPD diagnosis at block 1016 and/or outputs a confidence score corresponding to the second predicted asthma and/or COPD diagnosis at block 1020.
In some examples, a confidence score represents a predictive probability that a predicted asthma and/or COPD diagnosis is correct (e.g., that the patient truly has the predicted respiratory condition(s)). In some examples, determining the predictive probability includes the computing system determining a logit function (e.g., log-odds) corresponding to the predicted asthma and/or COPD diagnosis and subsequently determining the predictive probability based on an inverse of the logit function (e.g., based on an inverse-logit transformation of the log-odds). This predictive probability determination varies based on the data used to train a supervised machine learning model. For example, a supervised machine learning model trained using similar/correlated data (e.g., the first supervised machine learning model) will generate classifications (e.g., predictions) having higher predictive probabilities than a supervised machine learning model trained with dissimilar/uncorrelated data (e.g., the second supervised machine learning model) due in part to uncertainty and variation introduced into the model by the dissimilar/uncorrelated data. In some examples, the computing system determines the predictive probability based on one or more other logistic regression-based methods.
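The inverse-logit transformation described above can be illustrated in a few lines; the example log-odds values below are hypothetical and not taken from the disclosure.

```python
import math

def predictive_probability(log_odds: float) -> float:
    """Map a model's log-odds output to a predictive probability via the
    inverse logit (logistic sigmoid): p = 1 / (1 + exp(-log_odds))."""
    return 1.0 / (1.0 + math.exp(-log_odds))

# A log-odds of 0 corresponds to a 50% predictive probability;
# larger positive log-odds push the confidence toward 1.
print(predictive_probability(0.0))  # 0.5
print(predictive_probability(2.0))  # ~0.881
```

Because the second supervised model is trained on more dissimilar/uncorrelated data, its classifications would generally map to log-odds closer to zero, and hence to lower confidence scores, than those of the first model.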
In some examples, in addition to outputting the confidence scores, the computing system outputs (e.g., displays on a display) a visual breakdown of one or more of the confidence scores (e.g., a visual breakdown for each confidence score). A visual breakdown of a confidence score represents how the computing system generated the confidence score by showing the most impactful data input values with respect to the computing system's determination of a corresponding predicted asthma and/or COPD diagnosis (e.g., showing how those data input values push towards or away from the predicted diagnosis). For example, the visual breakdown can be a bar graph that includes a bar for one or more data input values included in the patient data (e.g., the most impactful data input values), with the length or height of each bar representing the relative importance and/or impact that each data input value had in the determination of the predicted diagnosis (e.g., the longer a data input's bar is, the more impact that data input value had on the predicted diagnosis determination).
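For a logistic-regression-style classifier, one simple way to sketch such a breakdown is to rank each data input by its signed contribution to the log-odds (coefficient × value) and render a proportional bar per input. The feature names, coefficients, and values below are hypothetical illustrations, not parameters from the disclosure.

```python
def contribution_breakdown(coefficients, values, width=20):
    """Rank data inputs by the magnitude of their contribution to the
    log-odds and render one text bar per input; '+' bars push toward the
    predicted diagnosis, '-' bars push away from it."""
    contribs = {name: coefficients[name] * values[name] for name in coefficients}
    ranked = sorted(contribs.items(), key=lambda kv: abs(kv[1]), reverse=True)
    biggest = max(abs(c) for _, c in ranked)
    lines = []
    for name, c in ranked:
        bar = ("+" if c >= 0 else "-") * max(1, round(width * abs(c) / biggest))
        lines.append(f"{name:>14} {bar}")
    return ranked, lines

# Hypothetical coefficients and patient data input values.
coefficients = {"smoking_years": 0.08, "fev1_fvc": -2.0, "age_of_onset": 0.03}
values = {"smoking_years": 25, "fev1_fvc": 0.55, "age_of_onset": 52}

ranked, lines = contribution_breakdown(coefficients, values)
print("\n".join(lines))
```

More sophisticated attribution methods (e.g., Shapley-value-based explanations) could fill the same role for non-linear models; the bar-per-input display is the same either way.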
Further, as shown in
At block 1202, a computing system (e.g., client system 102, cloud computing system 112, and/or cloud computing resource 126) receives a set of patient data corresponding to a first patient (e.g., as described above with reference to block 1002 of
At block 1204, the computing system determines whether the set of patient data corresponding to the first patient satisfies a set of one or more data-correlation criteria (e.g., as described above with reference to block 1012 of
In some examples, the set of one or more data-correlation criteria include one or more unsupervised machine learning models (e.g., one or more unsupervised machine learning model artifacts (e.g., a UMAP model, HDBSCAN model, and/or Gaussian mixture model)) generated by the computing system based on the application of the one or more unsupervised machine learning algorithms to the first historical set of patient data or to a stratified subset of the first historical set of patient data (e.g., as described above with reference to block 408 of
In some examples, the set of one or more data-correlation criteria includes a requirement that a patient fall within a cluster of one or more clusters of patients generated by applying the one or more unsupervised machine learning algorithms to the first historical set of patient data (e.g., as described above with reference to block 408 of
In other examples, the set of one or more data-correlation criteria includes a requirement that a patient fall within a covering manifold of patients generated by applying the one or more unsupervised machine learning algorithms to the feature-engineered first historical set of patient data (or to a stratified subset of the feature-engineered first historical set of patient data (e.g., stratified based on gender, smoking status, FEV1, FEV1/FVC ratio, BMI, number of symptoms, or weight)). In these examples, determining whether the set of patient data satisfies the set of one or more data-correlation criteria includes determining whether the first patient falls within the covering manifold (e.g., the set of patient data corresponding to the first patient satisfies the set of one or more data-correlation criteria if the patient falls within the covering manifold).
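The cluster-membership form of the data-correlation criteria can be sketched as a nearest-centroid test: the set of patient data satisfies the criteria if the patient's feature vector lies within some radius of a cluster centroid learned from the historical patient data. The centroids, radius, and feature values below are hypothetical; a deployed system would derive them from the unsupervised models described above.

```python
import math

def satisfies_data_correlation_criteria(patient, centroids, radius):
    """Return True if the patient's feature vector lies within `radius`
    of any cluster centroid learned from the historical patient data."""
    def distance(a, b):
        return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))
    return any(distance(patient, c) <= radius for c in centroids)

# Hypothetical phenotype-cluster centroids over two engineered features.
centroids = [(0.0, 0.0), (4.0, 4.0)]

print(satisfies_data_correlation_criteria((0.5, 0.5), centroids, radius=1.5))   # True
print(satisfies_data_correlation_criteria((2.0, -3.0), centroids, radius=1.5))  # False
```

The covering-manifold form of the criteria replaces the distance test with a density test (e.g., the Gaussian-mixture log-likelihood threshold sketched earlier), but the satisfied/not-satisfied branching is the same.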
At block 1206, in accordance with a determination that the set of patient data corresponding to the first patient satisfies the set of one or more data-correlation criteria, the computing system determines a first indication of whether the first patient has one or more respiratory conditions selected from a group consisting of asthma and COPD based on an application of a first diagnostic model to the set of patient data corresponding to the first patient (e.g., as described above with reference to block 1014 of
At block 1208, the computing system outputs the first indication of whether the first patient has one or more respiratory conditions selected from a group consisting of asthma and COPD (e.g., as described above with reference to block 1016 of
At block 1210, in accordance with a determination that the set of patient data corresponding to the first patient does not satisfy the set of one or more data-correlation criteria, the computing system determines a second indication of whether the first patient has one or more respiratory conditions selected from a group consisting of asthma and COPD based on an application of a second diagnostic model to the set of patient data corresponding to the first patient (e.g., as described above with reference to block 1018 of
At block 1212, the computing system outputs the second indication of whether the first patient has one or more respiratory conditions selected from a group consisting of asthma and COPD (e.g., as described above with reference to block 1020 of
Number | Date | Country
---|---|---
62817210 | Mar 2019 | US
Relation | Number | Date | Country
---|---|---|---
Parent | 17437336 | Sep 2021 | US
Child | 18667421 | | US