Various embodiments described herein are directed generally to health care and/or artificial intelligence. More particularly, but not exclusively, various methods and apparatus disclosed herein relate to converting time series data, such as electrocardiogram (“ECG”) waveforms, into different forms suitable for application across machine learning models.
Modern artificial intelligence techniques such as deep learning have numerous applications. Image processing may be one of the most developed. Numerous machine learning models (e.g., convolutional neural networks) already exist for performing image processing on digital images to perform tasks such as segmentation, object recognition, handwriting recognition, facial recognition, etc. In their most basic and/or common forms, many of these models do not operate on time-series data, i.e., a sequence of inputs that each corresponds to a point in time. Consequently, they may be adaptable for use across a variety of domains.
Techniques exist for processing time-series data such as electrocardiogram (“ECG”) waveforms to perform tasks like clinical decision making, diagnosis, interpretation, reduction of medical errors, expedition of patient care, etc. However, artificial intelligence models that operate on time-series data, such as recurrent neural networks, are more complex than models that operate on one-off data such as feed-forward neural networks. This added complexity makes them more difficult to train, as well as less adaptable across domains.
The present disclosure is directed to methods and apparatus for converting time series data such as electrocardiogram (“ECG”) data into forms suitable for application across machine learning models. For example, in various embodiments, ECG data may be obtained from a subject using various numbers of leads, such as twelve leads, or another number of leads. This ECG data may then be converted using techniques described herein to a form that is suitable for application across a machine learning model such as a convolutional neural network (“CNN”). This form may preserve both temporal and spatial information contained in the ECG data. Put another way, the converted form may preserve both the rhythm and morphology of ECG waveforms.
For example, ECG data (e.g., waveform(s)) may be used to generate a two-dimensional digital image that preserves both the rhythm and morphology of the ECG data. In some cases, this two-dimensional image may include multiple layers or channels corresponding to, for instance, different colors. For example, one type of digital image may include, for each pixel, a red channel, a green channel, and a blue channel (“RGB”). Other combinations of channels are contemplated. As will be described in more detail herein, various spatial and/or temporal aspects of the ECG data may be encoded into these different channels of the two-dimensional image. The two-dimensional digital image may then be applied as input across a suitably-trained machine learning model (e.g., a CNN) to generate output indicative of a health condition of the subject from which the ECG data was obtained. To train the machine learning model in the first place, similar two-dimensional images labeled by medical personnel may be used as training examples.
In some implementations, transfer learning may be employed to expedite the process of training the machine learning model by leveraging at least part of a preexisting, pre-trained machine learning model. Various pre-trained machine learning models are widely available for tasks such as image and/or object classification. For example, some preexisting models are trained to identify and/or classify, for instance, objects like different types of animals, furniture, tools, vehicles, etc. Others may be trained to perform tasks such as handwriting recognition. In some embodiments, these preexisting models, and particularly their “upstream” hidden layers that identify relatively generic visual features (e.g., edges and textures), may be leveraged to train additional downstream hidden layers that identify aspects of two-dimensional digital images generated from time-series data using techniques described herein.
Various techniques may be employed to convert time series data such as ECG data to two-dimensional image data. For example, in some embodiments, the ECG data may take the form of twelve waveforms that correspond to each of twelve-lead ECG data. Vectorcardiography (“VCG”) data may be generated from these ECG data, or from an intermediate form of these data. In addition to calculating VCG leads from the standard ECG lead system, VCG leads may also be recorded directly. Since in standard twelve-lead ECGs, four waveforms may be themselves derived from the other eight waveforms, in some embodiments, VCG leads are derived using only the eight waveforms. It may also be possible to derive VCGs from other ECG lead systems that may have fewer or more than eight leads. In some embodiments, each of the twelve waveforms may be converted first into a respective single representative beat, e.g., an average or mean. The twelve representative beats may then be converted into three VCG beats, with each VCG beat corresponding to a heart vector in one dimension of three-dimensional (“3D”) space. Three VCG projections may then be determined, with each VCG projection representing a respective one of the three VCG beats on a spatial plane. These projections may be combined into a two-dimensional image, e.g., with each VCG projection being stored in a respective channel (e.g., RGB) of the two-dimensional image.
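As a non-limiting illustration of one such pipeline, the sketch below derives VCG beats from the eight independent leads of a twelve-lead ECG using the Kors regression transform (one published lead transformation; others, such as the inverse Dower transform, could be substituted) and then forms the three planar projections. The sketch assumes Python with NumPy, and the function names are hypothetical:

    import numpy as np

    # Kors regression coefficients mapping the eight independent ECG leads
    # (I, II, V1..V6) to the three orthogonal VCG leads (X, Y, Z).
    KORS = np.array([
        #   I      II     V1     V2     V3     V4     V5     V6
        [ 0.38, -0.07, -0.13,  0.05, -0.01,  0.14,  0.06,  0.54],  # X
        [-0.07,  0.93,  0.06, -0.02, -0.05,  0.06, -0.17,  0.13],  # Y
        [ 0.11, -0.23, -0.43, -0.06, -0.14, -0.20, -0.11,  0.31],  # Z
    ])

    def ecg_to_vcg(beats_8lead):
        """Convert eight representative ECG beats, shape (8, n_samples),
        into three VCG beats (X, Y, Z), shape (3, n_samples)."""
        return KORS @ beats_8lead

    def vcg_projections(vcg):
        """Project the heart vector onto the frontal (X-Y), transverse (X-Z),
        and sagittal (Y-Z) planes, yielding one 2D loop per plane."""
        x, y, z = vcg
        return (x, y), (x, z), (y, z)

Each of the three returned loops may then be stored in a respective channel of an RGB image, as described above.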
Generally, in one aspect, a method implemented using one or more processors may include: generating a two-dimensional image based on VCG data, wherein the VCG data is recorded directly or is based on ECG data measured from a subject; applying the two-dimensional image as input across a machine learning model to generate output, wherein the machine learning model is configured for use in processing two-dimensional images; and determining a health condition of the subject based on the output.
In various embodiments, the ECG data comprises multiple waveforms corresponding to multiple ECG leads. In various embodiments, the method further includes converting each of the multiple waveforms into a respective single representative beat. In various embodiments, the method further includes converting the multiple representative beats into three VCG beats, wherein each VCG beat corresponds to a heart vector in one dimension of 3D space. In various embodiments, the method further includes upsampling the three VCG beats.
In various embodiments, the method further includes determining three VCG projections, each VCG projection representing a respective one of the three VCG beats on a spatial plane corresponding to a respective dimension of the 3D space. In various embodiments, the method further includes encoding the three VCG projections into three corresponding layers of the two-dimensional image. In various embodiments, the three corresponding layers include red, green, and blue. In various embodiments, the ECG data include single lead data obtained from a wearable device worn by the subject.
In addition, some implementations include one or more processors of one or more computing devices, where the one or more processors are operable to execute instructions stored in associated memory, and where the instructions are configured to cause performance of any of the aforementioned methods. Some implementations also include one or more non-transitory computer readable storage media storing computer instructions executable by one or more processors to perform any of the aforementioned methods.
It should be appreciated that all combinations of the foregoing concepts and additional concepts discussed in greater detail below (provided such concepts are not mutually inconsistent) are contemplated as being part of the inventive subject matter disclosed herein. In particular, all combinations of claimed subject matter appearing at the end of this disclosure are contemplated as being part of the inventive subject matter disclosed herein. It should also be appreciated that terminology explicitly employed herein that also may appear in any disclosure incorporated by reference should be accorded a meaning most consistent with the particular concepts disclosed herein.
In the drawings, like reference characters generally refer to the same parts throughout the different views. Also, the drawings are not necessarily to scale, emphasis instead generally being placed upon illustrating various principles of the embodiments described herein.
Modern artificial intelligence (“AI”) techniques such as deep learning have numerous applications, and image processing may be one of the most developed. While relatively adaptable across domains, these deep learning models may not be configured to process time-series data such as electrocardiogram (“ECG”) waveforms. Moreover, AI models that process time-series data are more complex, less readily available, and, even when available, not easily adapted to new domains. In view of the foregoing, various embodiments and implementations of the present disclosure are directed to converting time series data, such as ECG waveforms, into forms suitable for application across non-time-series machine learning models.
In an example environment, a first subject 1001 may be monitored using a twelve-lead ECG device, and the resulting ECG data may be provided, e.g., via one or more networks 108, to a hospital information system (“HIS”) 104.
Techniques described herein are not limited to twelve-lead ECG data (or to ECG data at all, for that matter). For example, a second subject 1002 is being monitored by a wearable ECG device in the form of a smart watch 114. This smart watch 114 may provide ECG data of a lesser number of leads, e.g., one lead, to an intermediate computing device, such as a laptop 115 operated by second subject 1002. Laptop 115 in turn may provide this ECG data to HIS 104 via network(s) 108 (or in some cases smart watch 114 may provide the ECG data directly to HIS 104 via network(s) 108).
A training system 120 and an inference system 124 may be implemented using any combination of hardware and software in order to create, manage, and/or apply machine learning model(s) stored in a machine learning (“ML”) model database 122. Training system 120 may be configured to apply training data, such as two-dimensional images generated using techniques described herein, as input across one or more of the models in database 122 to generate output. The output generated using the training data may be compared to labels associated with the training data in order to determine error(s) associated with the model(s). A training example’s label may indicate, for instance, the presence and/or probability of a health condition in a subject from which the training example was generated. These error(s) may then be used, e.g., by training system 120, to train the model(s) using techniques such as back propagation and gradient descent (stochastic or otherwise).
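As a rough, non-authoritative sketch of a single training pass such as training system 120 might perform, assuming a PyTorch model and a data loader that yields labeled two-dimensional images (all identifiers below are hypothetical):

    import torch
    import torch.nn as nn

    def train_epoch(model, loader, optimizer):
        """One pass over labeled two-dimensional images; each label indicates
        the presence (1.0) or absence (0.0) of a health condition."""
        criterion = nn.BCEWithLogitsLoss()
        for images, labels in loader:          # images: (B, 3, H, W); labels: (B, 1)
            optimizer.zero_grad()
            output = model(images)             # apply input across the model
            loss = criterion(output, labels)   # error between output and label
            loss.backward()                    # back propagation
            optimizer.step()                   # (stochastic) gradient descent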
Inference system 124 may be configured to use the trained machine learning model(s) in database 122 to infer health conditions of subjects based on two-dimensional imagery generated using techniques described herein. In some embodiments, training system 120 and/or inference system 124 may be implemented as part of a distributed computing system that is sometimes referred to as the “cloud,” although this is not required.
In some embodiments, the ability to make these inferences may be provided as part of a software application that aids doctor 112 with diagnosis, e.g., a clinical decision support (“CDS”) application. In some such embodiments, doctor 112 may rely on the inference as a “second opinion” to buttress or challenge their own medical opinion. Alternatively, the inferences may be used as ECG screening tests to separate, for instance, normal from abnormal ECG signals in a presumed healthy population (e.g., students, athletes, soldiers, etc.) so that further investigation can be performed on those subjects having abnormal ECG signals. Additionally or alternatively, techniques described herein may be incorporated into medical equipment that also incorporates ECG signals, such as exercise stress testing machines, defibrillators, cardiographs, bedside monitors, and so forth.
In some embodiments, subjects 1001 and 1002 themselves may take advantage of disclosed techniques to determine likelihoods that they have health conditions, e.g., whether their heart beats are normal or abnormal. For example, second subject 1002 may operate laptop 115 in order to interface with, and receive health condition inferences from, inference system 124. In some embodiments, e.g., to preserve privacy and/or respect privacy laws and regulations, one or more trained machine learning model(s) may be distributed from database 122 to remote computing devices (e.g., laptop 115). That way, the remote computing devices can perform selected aspects of the present disclosure on locally-obtained time-series data (e.g., ECG data) without having to transport the ECG data to, for instance, the “cloud.”
The drawings further illustrate how twelve-lead ECG waveforms may be converted into representative beats, then into three VCG beats, and finally into three VCG projections 248R, 248G, and 248B that are combined into a two-dimensional image 250.
In addition, in some embodiments, directions of the individual VCG projections may be preserved, as shown by the arrows depicted as part of each of the projections 248R-B (also visible in the two-dimensional image 250). These directions may be leveraged as additional features that are provided as inputs to a CNN to make inferences about medical conditions of subjects.
By implementing the conversions described above, time-series ECG data is made suitable for application across machine learning models designed for two-dimensional images. This, in turn, makes it possible to leverage preexisting, pre-trained image-processing models, e.g., through transfer learning.
For example, some machine learning models may already be trained, e.g., with millions of images, to classify images into thousands of object categories (such as keyboard, coffee mug, pencil, various animals, etc.). Such a network may have learned rich feature representations for a wide range of images. Accordingly, such a network can be commandeered and fine-tuned to, for instance, classify an ECG signal as “normal” or “abnormal.”
An example of this transfer learning approach is depicted in the drawings and described below.
The following describes one non-limiting example of how transfer learning may be applied to leverage a preexisting model to classify ECG data (converted into two-dimensional imagery as described herein) as, for instance, normal or abnormal, or to infer other health conditions. These other health conditions may include, for instance, atrial fibrillation, heart murmur, hypertrophy, etc.
First, a pre-trained neural network 560 may be obtained. All but the last few layers of this network may be retained as transferred layers 562A, and new layers 564, e.g., three new final layers, may be appended downstream so that the network can be fine-tuned to classify the two-dimensional images described herein.
In some embodiments, the training options and/or parameters may be set such that the new layers 564 may learn much faster than the transferred layers 562A. This may be achieved, for instance, by setting the initial learning rate for the transferred layers 562A to a smaller value compared to the newly added last three layers 564. When performing transfer learning, there may not be a need to train for as many epochs (an epoch is a full training cycle on the entire training dataset). This is because most of the network (extracted layers 562A) is already trained using a much larger training dataset.
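A minimal sketch of such a setup, assuming a recent version of PyTorch/torchvision and a ResNet-18 backbone as the preexisting model (the backbone, the layer split, and the learning rates are illustrative assumptions, not a description of network 560):

    import torch
    from torchvision import models

    # Reuse a publicly available pre-trained image classifier and replace its
    # final layer with a new head for two classes, e.g., "normal"/"abnormal".
    net = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    net.fc = torch.nn.Linear(net.fc.in_features, 2)   # newly added layer(s)

    # Give the transferred layers a much smaller initial learning rate than
    # the new head, so the new layers learn much faster than the rest.
    transferred = [p for name, p in net.named_parameters()
                   if not name.startswith("fc")]
    optimizer = torch.optim.SGD([
        {"params": transferred, "lr": 1e-4},
        {"params": net.fc.parameters(), "lr": 1e-2},
    ], momentum=0.9)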
At block 601, which includes blocks 602-610, the system may generate a two-dimensional image based on ECG data measured from a subject. In some embodiments, the operations of block 601 may be performed by inference system 124 or training system 120, depending on the circumstances. In other embodiments, they may be performed by a separate component not depicted in the figures.
At block 602, the system may convert each of some number of waveforms of multi-lead ECG data into a respective single representative beat, e.g., an average or mean beat, as described previously. At block 604, the system may convert the multiple representative beats into three VCG beats, each corresponding to a heart vector in one dimension of 3D space.
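One possible sketch of block 602, assuming R peaks are detected with SciPy and each beat is averaged over a fixed window (the crude peak detector, window length, and function name are illustrative assumptions):

    import numpy as np
    from scipy.signal import find_peaks

    def representative_beat(waveform, fs, window_s=0.6):
        """Average all beats in one ECG lead into a single representative beat.
        waveform: 1D samples for one lead; fs: sampling rate in Hz."""
        peaks, _ = find_peaks(waveform, distance=int(0.4 * fs),
                              height=np.percentile(waveform, 95))
        half = int(window_s * fs / 2)
        beats = [waveform[p - half:p + half]
                 for p in peaks if p - half >= 0 and p + half <= len(waveform)]
        return np.mean(beats, axis=0)   # the mean beat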
In some embodiments, at block 606, the system may upsample the (e.g., three) VCG beats converted at block 604, e.g., to improve spatial resolution. In other embodiments, this upsampling may be omitted. At block 608, the system may determine some number (e.g., three) of VCG projections based on the VCG beats. Each VCG projection may represent a respective one of the VCG beats on a spatial plane corresponding to a respective dimension of the 3D space, as demonstrated previously.
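Concerning the optional upsampling of block 606, one possibility (among many) is Fourier-based resampling:

    from scipy.signal import resample

    # Upsample each of the three VCG beats by, e.g., a factor of four, so that
    # the rasterized projections have finer spatial resolution. vcg: (3, n).
    vcg_hi = resample(vcg, vcg.shape[-1] * 4, axis=-1)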
At block 610, the system may encode the three VCG projections into three corresponding layers or channels of a two-dimensional image. For example, one VCG projection in one spatial plane may be encoded in a red channel of an RGB digital image, another may be encoded in a green channel of the RGB image, and another may be encoded in the blue channel of the RGB image.
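A minimal sketch of this encoding step, rasterizing each projection as a point loop into one channel of a 224x224 image (the image size and function names are illustrative assumptions):

    import numpy as np

    def rasterize(projection, size=224):
        """Draw one VCG projection (a 2D loop) into a single-channel image."""
        u, v = projection
        img = np.zeros((size, size), dtype=np.float32)
        cols = np.interp(u, (u.min(), u.max()), (0, size - 1)).astype(int)
        rows = np.interp(v, (v.min(), v.max()), (size - 1, 0)).astype(int)
        img[rows, cols] = 1.0   # mark the loop; row 0 is the top of the image
        return img

    def projections_to_rgb(frontal, transverse, sagittal, size=224):
        """Stack the three projections into the R, G, and B channels."""
        return np.stack([rasterize(p, size)
                         for p in (frontal, transverse, sagittal)], axis=-1)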
After blocks 602-610, the system is ready to use the generated multi-layered two-dimensional image either to make an inference (if the model is already trained) or to train the model. For example, at block 612, the system, e.g., by way of inference system 124 or training system 120, may apply the multi-layered two-dimensional image as input across the machine learning model to generate output. At block 614, the system may determine a health condition of the subject based on the output, e.g., a binary indication of whether the subject’s ECG signal is normal or abnormal.
Alternatively, the machine learning model may provide more than binary output. For example, in some embodiments, the machine learning model may generate, as output, a plurality of probabilities, each associated with a different health condition. In some such embodiments, the n (n≥1) health conditions having the highest probabilities may be presented to medical personnel, e.g., as part of a graphical user interface, as natural language output through a speaker, as part of a report on the underlying subject, etc.
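By way of a brief illustration, selecting the n most probable conditions from such an output might look as follows (the helper and condition names are hypothetical):

    import numpy as np

    def top_conditions(probabilities, condition_names, n=3):
        """Return the n health conditions with the highest probabilities."""
        order = np.argsort(probabilities)[::-1][:n]
        return [(condition_names[i], float(probabilities[i])) for i in order]

    # e.g., top_conditions(probs, ["normal", "atrial fibrillation",
    #                              "heart murmur", "hypertrophy"], n=2)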
In other embodiments where the machine learning model is being trained, the output generated at block 612 may be compared to a label associated with the input data. For example, the input data may be ECG data that was previously labeled by medical personnel as being, for instance, “normal,” “abnormal,” or as exhibiting one or more health conditions. The difference, or error, between the output generated using the machine learning model and the label may then be used, e.g., by training system 120, to train the machine learning model.
User interface input devices 722 may include a keyboard, pointing devices such as a mouse, trackball, touchpad, or graphics tablet, a scanner, a touchscreen incorporated into the display, audio input devices such as voice recognition systems, microphones, and/or other types of input devices. In general, use of the term “input device” is intended to include all possible types of devices and ways to input information into computing device 710 or onto a communication network.
User interface output devices 720 may include a display subsystem, a printer, a fax machine, or non-visual displays such as audio output devices. The display subsystem may include a cathode ray tube (CRT), a flat-panel device such as a liquid crystal display (LCD), a projection device, or some other mechanism for creating a visible image. The display subsystem may also provide non-visual display such as via audio output devices. In general, use of the term “output device” is intended to include all possible types of devices and ways to output information from computing device 710 to the user or to another machine or computing device.
Storage subsystem 724 stores programming and data constructs that provide the functionality of some or all of the modules described herein. For example, the storage subsystem 724 may include the logic to perform selected aspects of the methods described herein.
These software modules are generally executed by processor 714 alone or in combination with other processors. Memory 725 used in the storage subsystem 724 can include a number of memories including a main random access memory (RAM) 730 for storage of instructions and data during program execution and a read only memory (ROM) 732 in which fixed instructions are stored. A file storage subsystem 726 can provide persistent storage for program and data files, and may include a hard disk drive, a floppy disk drive along with associated removable media, a CD-ROM drive, an optical drive, or removable media cartridges. The modules implementing the functionality of certain implementations may be stored by file storage subsystem 726 in the storage subsystem 724, or in other machines accessible by the processor(s) 714.
Bus subsystem 712 provides a mechanism for letting the various components and subsystems of computing device 710 communicate with each other as intended. Although bus subsystem 712 is shown schematically as a single bus, alternative implementations of the bus subsystem may use multiple busses.
Computing device 710 can be of varying types including a workstation, server, computing cluster, blade server, server farm, or any other data processing system or computing device. Due to the ever-changing nature of computers and networks, the description of computing device 710 is intended only as a specific example for purposes of illustrating some implementations; many other configurations of computing device 710 are possible.
The column on the far right of the matrix shows the percentages of all the examples predicted to belong to each class that are correctly (bolded) and incorrectly (italicized) classified. These metrics are often called the precision (or positive predictive value) and false discovery rate, respectively. The row at the bottom of the plot shows the percentages of all the examples belonging to each class that are correctly and incorrectly classified. These metrics are often called the recall (or true positive rate or sensitivity) and false negative rate, respectively. The cell in the bottom right of the plot shows the overall accuracy.
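For concreteness, these per-class metrics may be computed from a raw count matrix as follows (assuming rows index true classes and columns index predicted classes; the orientation of a plotted confusion matrix may differ):

    import numpy as np

    def summarize(cm):
        """cm[i, j] = number of examples of true class i predicted as class j."""
        precision = np.diag(cm) / cm.sum(axis=0)   # per predicted class (column)
        recall    = np.diag(cm) / cm.sum(axis=1)   # per true class (row)
        accuracy  = np.trace(cm) / cm.sum()        # overall accuracy
        # False discovery rate = 1 - precision; false negative rate = 1 - recall.
        return precision, recall, accuracy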
While several inventive embodiments have been described and illustrated herein, those of ordinary skill in the art will readily envision a variety of other means and/or structures for performing the function and/or obtaining the results and/or one or more of the advantages described herein, and each of such variations and/or modifications is deemed to be within the scope of the inventive embodiments described herein. More generally, those skilled in the art will readily appreciate that all parameters, dimensions, materials, and configurations described herein are meant to be exemplary and that the actual parameters, dimensions, materials, and/or configurations will depend upon the specific application or applications for which the inventive teachings is/are used. Those skilled in the art will recognize, or be able to ascertain using no more than routine experimentation, many equivalents to the specific inventive embodiments described herein. It is, therefore, to be understood that the foregoing embodiments are presented by way of example only and that, within the scope of the appended claims and equivalents thereto, inventive embodiments may be practiced otherwise than as specifically described and claimed. Inventive embodiments of the present disclosure are directed to each individual feature, system, article, material, kit, and/or method described herein. In addition, any combination of two or more such features, systems, articles, materials, kits, and/or methods, if such features, systems, articles, materials, kits, and/or methods are not mutually inconsistent, is included within the inventive scope of the present disclosure.
All definitions, as defined and used herein, should be understood to control over dictionary definitions, definitions in documents incorporated by reference, and/or ordinary meanings of the defined terms. The indefinite articles “a” and “an,” as used herein in the specification and in the claims, unless clearly indicated to the contrary, should be understood to mean “at least one.”
The phrase “and/or,” as used herein in the specification and in the claims, should be understood to mean “either or both” of the elements so conjoined, i.e., elements that are conjunctively present in some cases and disjunctively present in other cases. Multiple elements listed with “and/or” should be construed in the same fashion, i.e., “one or more” of the elements so conjoined. Other elements may optionally be present other than the elements specifically identified by the “and/or” clause, whether related or unrelated to those elements specifically identified. Thus, as a non-limiting example, a reference to “A and/or B”, when used in conjunction with open-ended language such as “comprising” can refer, in one embodiment, to A only (optionally including elements other than B); in another embodiment, to B only (optionally including elements other than A); in yet another embodiment, to both A and B (optionally including other elements); etc.
As used herein in the specification and in the claims, “or” should be understood to have the same meaning as “and/or” as defined above. For example, when separating items in a list, “or” or “and/or” shall be interpreted as being inclusive, i.e., the inclusion of at least one, but also including more than one, of a number or list of elements, and, optionally, additional unlisted items. Only terms clearly indicated to the contrary, such as “only one of” or “exactly one of,” or, when used in the claims, “consisting of,” will refer to the inclusion of exactly one element of a number or list of elements. In general, the term “or” as used herein shall only be interpreted as indicating exclusive alternatives (i.e. “one or the other but not both”) when preceded by terms of exclusivity, such as “either,” “one of,” “only one of,” or “exactly one of.” “Consisting essentially of,” when used in the claims, shall have its ordinary meaning as used in the field of patent law.
As used herein in the specification and in the claims, the phrase “at least one,” in reference to a list of one or more elements, should be understood to mean at least one element selected from any one or more of the elements in the list of elements, but not necessarily including at least one of each and every element specifically listed within the list of elements and not excluding any combinations of elements in the list of elements. This definition also allows that elements may optionally be present other than the elements specifically identified within the list of elements to which the phrase “at least one” refers, whether related or unrelated to those elements specifically identified. Thus, as a non-limiting example, “at least one of A and B” (or, equivalently, “at least one of A or B,” or, equivalently “at least one of A and/or B”) can refer, in one embodiment, to at least one, optionally including more than one, A, with no B present (and optionally including elements other than B); in another embodiment, to at least one, optionally including more than one, B, with no A present (and optionally including elements other than A); in yet another embodiment, to at least one, optionally including more than one, A, and at least one, optionally including more than one, B (and optionally including other elements); etc.
It should also be understood that, unless clearly indicated to the contrary, in any methods claimed herein that include more than one step or act, the order of the steps or acts of the method is not necessarily limited to the order in which the steps or acts of the method are recited.
In the claims, as well as in the specification above, all transitional phrases such as “comprising,” “including,” “carrying,” “having,” “containing,” “involving,” “holding,” “composed of,” and the like are to be understood to be open-ended, i.e., to mean including but not limited to. Only the transitional phrases “consisting of” and “consisting essentially of” shall be closed or semi-closed transitional phrases, respectively, as set forth in the United States Patent Office Manual of Patent Examining Procedures, Section 2111.03. It should be understood that certain expressions and reference signs used in the claims pursuant to Rule 6.2(b) of the Patent Cooperation Treaty (“PCT”) do not limit the scope.