Aspects of the present disclosure relate to electrocardiogram (ECG) interpretation, and in particular to search and classification of ECGs to aid in ECG interpretation.
Cardiovascular diseases are the leading cause of death in the world. In 2008, 30% of all global deaths were attributable to cardiovascular diseases. It is also estimated that by 2030, over 23 million people will die from cardiovascular diseases annually. Cardiovascular diseases are prevalent across populations of first and third world countries alike, and affect people regardless of socioeconomic status.
Arrhythmia is a cardiac condition in which the electrical activity of the heart is irregular or is faster (tachycardia) or slower (bradycardia) than normal. Although many arrhythmias are not life-threatening, some can cause cardiac arrest and even sudden cardiac death. Indeed, cardiac arrhythmias are one of the most common causes of death when travelling to a hospital. Atrial fibrillation (A-fib) is the most common cardiac arrhythmia. In A-fib, electrical conduction through the atria of the heart is irregular and disorganized. While A-fib may cause no symptoms, it is often associated with palpitations, shortness of breath, fainting, chest pain, or congestive heart failure, and also increases the risk of stroke. A-fib is usually diagnosed by taking an electrocardiogram (ECG) of a subject. To treat A-fib, a patient may take medications to slow the heart rate or modify the rhythm of the heart. Patients may also take anticoagulants to prevent stroke or may even undergo surgical intervention, including cardiac ablation, to treat A-fib. In another example, an ECG may provide decision support for Acute Coronary Syndromes (ACS) by interpreting various rhythm and morphology conditions, including Myocardial Infarction (MI) and Ischemia.
Often, a patient with A-fib (or other type of arrhythmia) is monitored for extended periods of time to manage the disease. For example, a patient may be provided with a Holter monitor or other ambulatory electrocardiography device to continuously monitor the electrical activity of the cardiovascular system for e.g., at least 24 hours. Such monitoring can be critical in detecting conditions such as acute coronary syndrome (ACS), among others.
The American Heart Association and the European Society of Cardiology recommend that a 12-lead ECG be acquired as early as possible for patients with possible ACS when symptoms present. Prehospital ECG has been found to significantly reduce time-to-treatment and is associated with better survival rates. The time-to-first-ECG is so vital that it is a quality and performance metric monitored by several regulatory bodies. According to national health statistics for 2015, over 7 million people visited the emergency department (ED) in the United States (U.S.) with a primary complaint of chest pain or related symptoms of ACS. In the U.S., ED visits are increasing at a rate of 3.2% annually, and outside the U.S., ED visits are increasing at 3% to 7% annually. In ACS ECG interpretation, the most accurate and specific method is to compare a current ECG with a previously recorded ECG of the same patient to see if there are any significant changes in the ST-T segments and the QRS complex.
The described embodiments and the advantages thereof may best be understood by reference to the following description taken in conjunction with the accompanying drawings. These drawings in no way limit any changes in form and detail that may be made to the described embodiments by one skilled in the art without departing from the spirit and scope of the described embodiments.
Computer-generated ECG interpretations have been used for many years, and many of the systems that generate them operate based on input from experts and predefined sets of criteria. Recently, the use of deep learning models to generate ECG interpretations has been explored, but has not been widely applied to actual medical devices and systems. One of the main reasons is that machine learning models such as DNN models are generally a “black box,” in that they provide the interpretation of an ECG but do not indicate why a certain result was reached. For a comprehensive multi-lead ECG interpretation, there are many classes, such as e.g., rhythm and morphology interpretations, which usually require some explanation or reasoning for the final interpretations. This is unlike the simple types of detection performed by smart watches and other wearable devices for e.g., AFIB and sinus rhythm detection. Utilizing machine learning model-based ECG interpretation while adding transparent reasoning for interpretation results is a very important task for further expanding the use of machine learning ECG interpretation models in a variety of clinical applications.
The present disclosure addresses the above-noted and other deficiencies by providing systems and methods for performing an ECG search based on a dual ECG and text embedding model. A processing device may train a text machine learning (ML) model to generate a text embedding based on a received text representation of an ECG diagnosis. The processing device may train, using the text ML model, an ECG encoding ML model to generate an ECG embedding based on received ECG leads data, wherein ECG embeddings generated from similar ECG leads data are proximate to each other in vector space. The processing device may populate a database with a plurality of ECG embeddings, each of the plurality of ECG embeddings generated based on ECG leads data of a previously diagnosed ECG. In response to receiving a query ECG, the processing device may generate, using the ECG ML model, a query embedding and may determine a similarity score between the query embedding and each of the plurality of ECG embeddings. The processing device may sort the ECG embeddings in descending order based on similarity score, and may display/visualize (or transmit to the local computing device 120 for display/visualization) the top K results.
The local computing device 120 may be coupled to one or more biometric sensors. For example, the local computing device 120 may be coupled to an ECG monitor 110 which may comprise a set of electrodes for recording ECG (electrocardiogram) data (also referred to herein as “taking an ECG”) of the first user’s heart. The ECG data can be recorded or taken using the set of electrodes which are placed on the skin of the first user in multiple locations. The electrical signals recorded between electrode pairs may be referred to as leads and
In some embodiments, the ECG monitor 110 may comprise a handheld ECG monitor (such as the KardiaMobile® or KardiaMobile® 6L device from AliveCor® Inc., for example) comprising a smaller number of electrodes (e.g., 2 or 3 electrodes). In these embodiments, the electrodes can be used to measure a subset of the leads illustrated in
The ECG data recorded by the ECG monitor 110 may comprise the electrical activity of the first user’s heart, for example. A typical heartbeat may include several variations of electrical potential, which may be classified into waves and complexes, including a P wave, a QRS complex, a T wave, and sometimes U wave as known in the art. The shape and duration of the P wave can be related to the size of the user’s atrium (e.g., indicating atrial enlargement) and can be a first source of heartbeat characteristics unique to a user.
The duration, amplitude, and morphology of each of the Q, R, and S waves can vary in different individuals, and in particular can vary significantly for users having cardiac diseases or cardiac irregularities. For example, a Q wave that is greater than ⅓ of the height of the R wave, or greater than 40 ms in duration, can be indicative of a myocardial infarction and provide a unique characteristic of the user’s heart. Similarly, other healthy ratios of Q and R waves can be used to distinguish different users’ heartbeats.
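The Q-wave rule above can be sketched as a simple check. This is an illustrative helper only; the function name, units, and thresholds-as-code are assumptions drawn from the rule of thumb stated above, not an implementation from the disclosure:

```python
def pathologic_q_wave(q_amplitude_mv, r_amplitude_mv, q_duration_ms):
    """Flag a Q wave as potentially pathologic if it exceeds 1/3 of the
    R-wave height or lasts longer than 40 ms (the rule described above)."""
    return q_amplitude_mv > r_amplitude_mv / 3 or q_duration_ms > 40
```

For instance, a 0.5 mV Q wave against a 1.2 mV R wave exceeds the one-third ratio and would be flagged, while a 0.2 mV Q wave of 30 ms would not.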
The ECG monitor 110 may be used by the first user to measure their ECG data and transmit the measured ECG data to the local computing device 120 using any appropriate wired or wireless connection (e.g., a Wi-Fi connection, a Bluetooth® connection, a near-field communication (NFC) connection, an ultrasound signal transmission connection, etc.).
The ECG data may be continually recorded by the user at regular intervals. For example, the interval may be once a day, once a week, once a month, or some other predetermined interval. The ECG data may be recorded at the same or different times of days, under similar or different circumstances, as described herein. The ECG data may also be recorded at the same or different times of the interval (e.g., the ECG data may be captured asynchronously). Alternatively, or additionally, the ECG data can be recorded on demand by the user at various discrete times, such as when the user feels chest pains or experiences other unusual or abnormal feelings, or in response to an instruction to do so from e.g., the user’s physician. In another embodiment, ECG data may be continuously recorded over a period of time (e.g., by a Holter monitor or by some other wearable device).
Each ECG data recording may be time stamped and may be annotated with additional data by the user or health care provider to describe user characteristics. For example, the local computing device 120 (e.g., the mobile app 101A thereof) may include a user interface for data entry that allows the user to enter their user characteristics including e.g., a user ID. The local computing device 120 may append the user characteristics to the ECG data and transmit the ECG data to the cloud services system 140.
The ECG data can be transmitted by the local computing device 120 to the cloud services system 140 for storage and analysis. The transmission can be real-time, at regular intervals such as hourly, daily, weekly and/or any interval in between, or can be on demand. The local computing device 120 and the cloud services system 140 may be coupled to each other (e.g., may be operatively coupled, communicatively coupled, may communicate data/messages with each other) via network 130. Network 130 may be a public network (e.g., the internet), a private network (e.g., a local area network (LAN) or wide area network (WAN)), or a combination thereof. In one embodiment, network 130 may include a wired or a wireless infrastructure, which may be provided by one or more wireless communications systems, such as a Wi-Fi hotspot connected with the network 130 and/or a wireless carrier system that can be implemented using various data processing equipment, communication towers (e.g., cell towers), etc. The network 130 may carry communications (e.g., data, message, packets, frames, etc.) between the local computing device 120 and the cloud services system 140.
Machine learning (ML) models are well suited for continuous monitoring of one or multiple criteria to identify anomalies or trends, big and small, in input data as compared to training examples used to train the model. The ML models described herein may be trained on ECG data from a population of users, and/or trained on other training examples to suit the design needs for the model. Machine learning models that may be used with embodiments described herein include, by way of example and not limitation: Bayes, Markov, Gaussian processes, clustering algorithms, generative models, and kernel and neural network algorithms. Some embodiments utilize a machine learning model based on a trained neural network (e.g., a trained recurrent neural network (RNN) or a trained convolutional neural network (CNN)).
For example, an ML model may comprise a trained CNN ML model that takes input data (e.g., ECG data) into convolutional layers (also known as hidden layers) and applies a series of trained weights or filters to the input data in each of the convolutional layers. The output of the first convolutional layer is an activation map, which is the input to the second convolutional layer, to which a trained weight or filter (not shown) is applied; the output of each subsequent convolutional layer is an activation map that represents progressively more complex features of the input data to the first layer. After each convolutional layer, a non-linear layer (not shown) is applied to introduce non-linearity into the problem; such nonlinear layers may include an activation function such as tanh, sigmoid, or ReLU. In some cases, a pooling layer (not shown), also referred to as a downsampling layer, may be applied after the nonlinear layers; it takes a filter and a stride of the same length, applies the filter to the input, and outputs the maximum value in every sub-region the filter convolves around. Other pooling options include average pooling and L2-norm pooling. The pooling layer reduces the spatial dimension of the input volume, reducing computational cost and helping to control overfitting. The final layer(s) of the network is a fully connected layer, which takes the output of the last convolutional layer and outputs an n-dimensional output vector representing the quantity to be predicted. This may result in a predictive output. The trained weights may be different for each of the convolutional layers.
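The convolution → non-linearity → pooling pipeline described above can be illustrated in miniature for a single signal. The following is a toy NumPy sketch with a hand-picked "trained" filter, not the actual model or its weights:

```python
import numpy as np

def conv1d(signal, kernel, stride=1):
    """Valid 1-D convolution: slide a trained filter over the input."""
    n = (len(signal) - len(kernel)) // stride + 1
    return np.array([np.dot(signal[i * stride:i * stride + len(kernel)], kernel)
                     for i in range(n)])

def relu(x):
    """Non-linear activation applied after a convolutional layer."""
    return np.maximum(x, 0.0)

def max_pool(x, size=2):
    """Downsampling: keep the maximum in each non-overlapping sub-region."""
    n = len(x) // size
    return np.array([x[i * size:(i + 1) * size].max() for i in range(n)])

signal = np.array([0.0, 1.0, 2.0, 1.0, 0.0, -1.0, 0.0, 2.0])
kernel = np.array([1.0, -1.0])  # illustrative "trained" filter
activation = max_pool(relu(conv1d(signal, kernel)))
```

Each stage shrinks or transforms the signal just as the layers described above do, with stacked layers building progressively more complex activation maps.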
To achieve real-world prediction/detection, a neural network needs to be trained on known data inputs or training examples, thereby resulting in a trained CNN. To train a CNN, many different training examples (e.g., ECG data from users) are input into the model. A skilled artisan in neural networks will understand that the description above provides a somewhat simplified view of neural networks to give context for the present discussion, and will appreciate that any neural network, alone or in combination with other neural networks or entirely different machine learning models, will be equally applicable and within the scope of some embodiments described herein.
The memory 140A may further include an ECG encoder training module 141 and an ECG search module 143, each of which may be executed by the processing device 140B in order to perform some of the functions described herein. The processing device 140B may execute the ECG encoder training module 141 in order to train an ECG encoder for use with the ECG search module 143 as described in further detail herein. The memory 140A may further include training data 150 which may comprise text representations of a plurality of ECG diagnoses for use in training a text encoder 145 as discussed in further detail herein. The memory 140A may further include training data 155 which may comprise ECG recordings (i.e., raw leads data) and text representations of a corresponding diagnosis for each of a plurality of ECGs. As used herein, an ECG recording may refer to the raw leads data of an ECG.
Upon executing the ECG encoder training module 141, the processing device 140B may train a text encoder 145 (shown in
Embeddings make it easier to perform machine learning on large inputs such as sparse vectors representing words, and can be learned and reused across models. The text representation of each ECG diagnosis of the training data 150 may be an ordered sequence of diagnosis codes representing the diagnosis generated for the ECG. Each diagnosis code in a sequence may represent an observed condition (e.g., code 22 = “normal sinus rhythm”), a grammatical modifier (e.g., code 179 = “and”), or an adverb/adverbial phrase (e.g., code 211 = “with occasional”). For example, the diagnosis code sequence [19, 221, 1766] translates to “sinus rhythm with premature ventricular complexes.” Although the embodiments of the present disclosure are described using an ordered sequence of diagnosis codes representing an ECG diagnosis as the text representation of the ECG diagnosis for example purposes, they are not limited in this way and may be realized using any appropriate text representation of ECG diagnoses.
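Decoding such a code sequence might look like the following sketch. The code table is reconstructed from the examples above; any entries beyond those are illustrative, and the real vocabulary is assumed to be much larger:

```python
# Partial code table, reconstructed from the examples in the text above
CODE_TABLE = {
    19: "sinus rhythm",
    22: "normal sinus rhythm",
    179: "and",
    211: "with occasional",
    221: "with",
    1766: "premature ventricular complexes",
}

def decode(sequence):
    """Translate an ordered sequence of diagnosis codes into text."""
    return " ".join(CODE_TABLE[code] for code in sequence)

decode([19, 221, 1766])  # "sinus rhythm with premature ventricular complexes"
```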
The text encoder 145 may learn to encode sequences of diagnosis codes into a text embedding (vector in an embedding space) by training on a masked prediction task. Thus, for the first sequence of diagnosis codes, the processing device 140B may remove a diagnosis code from the sequence at random, and replace it with a <MASK> token (i.e., “mask” that diagnosis code) as shown in
It should be noted that what the text encoder 145 is really learning is a probability distribution of different diagnosis codes that could fit in the masked token which ultimately informs how sequences of diagnosis codes are to be understood/interpreted. More specifically, the representation function of the text encoder 145 may map a sequence of diagnosis codes to a text embedding (vector), and a classifier layer of the text encoder 145 may map a text embedding to a probability distribution of tokens. The classifier layer may be trained to predict the masked diagnosis code from the representation function’s embedding at the position of the masked diagnosis code. Because the diagnosis code in that position is masked, the representation function must generate this embedding from context (i.e., by using unmasked diagnosis codes in the sequence). Contexts which produce similar distributions are likely to have similar embeddings. For example, assume that there are two training instances: “normal sinus rhythm, normal ECG” and “sinus rhythm, normal ECG,” and that the second diagnosis code of each is masked (“normal sinus rhythm, <MASK>” and “sinus rhythm, <MASK>”). The text encoder 145 is likely to learn that “normal sinus rhythm” and “sinus rhythm” are similar (and produce similar context embeddings), since “normal ECG” is the target prediction for both.
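The masked prediction setup described above can be sketched as follows. `mask_one` is a hypothetical helper illustrating only the masking step, not the disclosure's training implementation:

```python
import random

MASK = "<MASK>"

def mask_one(sequence, rng=random):
    """Replace one diagnosis code, chosen at random, with the <MASK> token.
    Returns the masked sequence plus the (position, code) the model must
    predict from the surrounding unmasked context."""
    i = rng.randrange(len(sequence))
    masked = list(sequence)
    target = masked[i]
    masked[i] = MASK
    return masked, (i, target)

masked, (pos, code) = mask_one([19, 221, 1766])
```

During training, the representation function sees only the unmasked codes, so contexts that predict similar distributions over the masked position end up with similar embeddings, as described above.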
Upon completion of the training of the text encoder 145, the text encoder 145 may receive a sequence of diagnosis codes and output a sequence of vectors (continuous real numbers) that captures all the diagnostic information that a physician or health care professional requires, and does so in such a way that similar diagnoses are close together in the embedding space.
An ECG search may be implemented by training an ECG encoder 147 to learn a representation function that transforms an (e.g., 10 second 12-lead) ECG recording into a vector in an embedding space (referred to as an “ECG embedding”). The ECG encoder 147 should have the same property as the text encoder 145, in that ECGs with similar diagnoses will be pushed into the same region of the embedding space, and ECGs with different diagnoses will be pushed away into different regions. Thus, the processing device 140B may train the ECG encoder 147 using a joint embedding space between ECG recordings and text representations of corresponding diagnoses. To do this, the processing device 140B may use the representation function learned by the text encoder 145 to supervise the training of the ECG encoder 147. However, the processing device 140B may utilize a soft form of supervision that merely uses text embeddings as a starting point to learn the joint embedding. The processing device 140B may train the ECG encoder 147 using training data 155, which may comprise ECG recordings (i.e., raw leads data) and text representations (i.e., sequences of diagnosis codes) of a corresponding diagnosis for each of a plurality of ECGs.
The ECG encoder 147 may comprise a convolutional network, a lead combiner, and a convolutional residual network (not shown in the FIGS.). The convolutional network may down sample and extract features from each lead independently (performing the same operation on each lead). The lead combiner may integrate and mix information from all of the leads. The convolutional residual network may perform additional processing and down sampling, using a technique sometimes referred to as an information bottleneck, wherein information is passed through a smaller space, thereby forcing the ECG encoder 147 to learn how to represent that information more efficiently and discard information that is extraneous or irrelevant. In this way, the processing device 140B may train the ECG encoder 147 to learn how to represent the raw leads data of each ECG of the training data 155 more efficiently and discard information that is unnecessary (as ECG recordings often have a significant amount of redundant information). In some embodiments, during training the processing device 140B may randomly zero out individual leads with 10% probability so as to make the ECG encoder 147 more robust to the effects of a bad lead contact and/or missing or corrupted lead data. Dropping out entire leads encourages the ECG encoder 147 to learn lead-independent features, rather than correlating its output strongly to one “best” lead or a subset of the “best” leads. The processing device 140B may train an ECG embedding projection layer 405 which may comprise a learnable linear transformation which transforms the output sequence of the ECG encoder 147 (ECG embedding) to the joint embedding space 410. The ECG embedding projection layer 405 may comprise a fully-connected layer (not shown) that outputs a vector of 256 length (the size of the joint embedding space 410). 
The ECG embedding projection layer 405 may also divide the output vector by its Euclidean norm (i.e., L2 normalize the output vector) so that the output vector is always a vector of unit length.
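The lead dropout and unit-length normalization described above might look like the following NumPy sketch. The array shapes and random generator are assumptions for illustration:

```python
import numpy as np

def drop_leads(leads, p=0.10, rng=np.random.default_rng()):
    """Zero out each lead (row) independently with probability p during
    training, making the encoder robust to bad contact or missing leads."""
    keep = rng.random(leads.shape[0]) >= p
    return leads * keep[:, None]

def l2_normalize(v):
    """Divide by the Euclidean (L2) norm so the embedding has unit length."""
    return v / np.linalg.norm(v)

# A 256-dimensional projection output, normalized onto the unit sphere
embedding = l2_normalize(np.arange(1.0, 257.0))
```

Unit-length embeddings make the later dot-product comparison equivalent to cosine similarity, which is why the projection layer normalizes its output.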
As shown in
The joint embedding space 410 is where the processing device 140B (executing ECG encoder training module 141) may apply a loss function for training the ECG encoder 147 so that it can learn to match ECG embeddings with corresponding text embeddings.
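The disclosure does not fix a specific form for this loss function. One plausible matching loss for unit-length paired embeddings, offered purely as an assumption for illustration, is the mean cosine distance between each ECG embedding and its paired text embedding:

```python
import numpy as np

def matching_loss(ecg_embeddings, text_embeddings):
    """Mean (1 - cosine similarity) over paired unit-length embeddings:
    zero when every ECG embedding exactly matches its text embedding."""
    sims = np.sum(ecg_embeddings * text_embeddings, axis=1)
    return float(np.mean(1.0 - sims))

paired = np.eye(4)[:2]                 # two already-matched unit vectors
loss = matching_loss(paired, paired)   # perfectly matched pairs -> 0.0
```

Minimizing such a loss pulls each ECG embedding toward its corresponding text embedding in the joint embedding space 410.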
Upon receiving a query ECG from the user (e.g., via local computing device 120 as discussed herein), the processing device 140B may execute the ECG search module 143 in order to utilize the trained ECG encoder 147 to perform an ECG search.
The processing device 140B (executing ECG search module 143) may compute a similarity score between the query embedding and each ECG embedding in the database 605. ECG embedding vectors have 256 components, are normalized to unit length, and pairs of vectors may be compared using the vector dot product as a metric. Thus, the processing device 140B may use the dot product between the query embedding and an ECG embedding as the similarity score and may compute a similarity score for the query embedding and each ECG embedding. In some embodiments, the similarity scores can be computed quickly and in parallel using a distributed query engine such as Presto or Spark. The processing device 140B may sort the records in descending order based on similarity score, and may display/visualize (or transmit to the local computing device 120 for display/visualization) the top K results.
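The dot-product scoring and descending sort described above can be sketched as follows; the database here is random unit vectors standing in for stored ECG embeddings, and the function name is illustrative:

```python
import numpy as np

def top_k_search(query, database, k=5):
    """Score every stored embedding against the query with a dot product
    (unit-length vectors, so this is cosine similarity), then return the
    indices of the K best matches in descending score order."""
    scores = database @ query       # one dot product per stored embedding
    order = np.argsort(-scores)     # sort by descending similarity
    return order[:k], scores[order[:k]]

rng = np.random.default_rng(0)
db = rng.normal(size=(100, 256))
db /= np.linalg.norm(db, axis=1, keepdims=True)   # unit-length embeddings
idx, scores = top_k_search(db[42], db, k=3)       # query with a stored ECG
```

Querying with a vector already in the database returns that record first with similarity 1.0, which matches the intuition that an identical ECG is its own best match.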
The use of a dual embedding model to perform an enhanced ECG search may be used in a variety of ways. In one example, the embodiments of the present disclosure may be used to find ECGs that are similar to a selected ECG (e.g., to determine whether a particular patient has had an ECG like the selected one before). In another example, the embodiments of the present disclosure may be used to identify trends, changes, and/or seasonality in a particular user’s cardiac health (e.g., to determine if an ECG is normal for a particular patient, or if there has been a change in their ECG that requires further analysis). In line with these examples, in some embodiments the processing device 140B may generate and display a timeline view of a patient’s ECG history, which may allow the user (e.g., a physician or nurse) to rapidly identify ECGs of interest. The user can select one ECG or a pair which will be displayed below the timeline, either as a single ECG or two ECGs side-by-side for comparison.
The timeline view of a patient’s ECG records can be used for serial comparison, where the first step is to determine if a significant change has occurred in the rhythm and/or morphology of the patient’s ECG records. A threshold of significant change can be established from the correlation of the dual-embedding variables. If the correlation is higher than the threshold, there is no significant change between the current ECG and the referenced one, so the interpretation status will not change. If the correlation is lower than the threshold, this may indicate that some significant changes have occurred. A further analysis on ECG parameters and embedding variables can define what type of changes have occurred, like ST-T change for an ACS case, or QRS duration change for a bundle branch block case, etc.
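The serial-comparison thresholding step might be sketched as below. The 0.8 default is illustrative only, as the disclosure does not specify a threshold value:

```python
def significant_change(correlation, threshold=0.8):
    """Serial-comparison rule described above: a correlation of the
    dual-embedding variables at or above the threshold means no significant
    change from the referenced ECG; a lower correlation flags a possible
    rhythm/morphology change for further analysis."""
    return correlation < threshold
```

A flagged pair would then undergo further analysis of ECG parameters and embedding variables to characterize the change (e.g., ST-T change or QRS duration change).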
There may be situations where it is desirable to focus on a specific set of conditions when performing an ECG search. For example, a data scientist may wish to mine a database of ECGs for records that may have a particular diagnosis. In another example, a physician may wish to search a patient’s ECG history, which is particularly relevant in the context of mobile/at-home ECG users who often have many unlabeled ECGs. However, because the ECG encoder 147 is trained based on matching text embeddings as discussed above, without reference to any specific classification goal, execution of the ECG search module 143 may produce results that are more generic (and not focused on particular conditions). Thus, in some embodiments, the processing device 140B may execute classification module 142 (instead of ECG search module 143) in order to further train a classifier 149 to classify ECG search results that meet the specific conditions that a user is trying to classify for, as shown in
Referring simultaneously to
The text encoder 145 may learn to encode sequences of diagnosis codes into a vector in an embedding space (a text embedding) by training on a masked prediction task. Thus, for the first sequence of diagnosis codes, the processing device 140B may remove a diagnosis code from the sequence at random, and replace it with a <MASK> token as shown in
Upon completion of the training of the text encoder 145, the text encoder 145 may receive a sequence of diagnosis codes and output a sequence of vectors (continuous real numbers) that captures all the diagnostic information that a physician or health care professional requires, and does so in such a way that similar diagnoses are close together in the embedding space.
At block 810, the processing device 140B may train an ECG encoder 147 to learn a representation function that transforms an (e.g., 10 second 12-lead) ECG recording into a vector in an embedding space (referred to as an “ECG embedding”). The ECG encoder 147 should have the same property as the text encoder 145, in that ECGs with similar diagnoses will be pushed into the same region of the embedding space, and ECGs with different diagnoses will be pushed away into different regions. Thus, the processing device 140B may train the ECG encoder 147 using a joint embedding space between ECG recordings and text representations of corresponding diagnoses. To do this, the processing device 140B may use the representation function learned by the text encoder 145 to supervise the training of the ECG encoder 147. However, the processing device 140B may utilize a soft form of supervision that merely uses text embeddings as a starting point to learn the joint embedding. The processing device 140B may train the ECG encoder 147 using training data 155, which may comprise ECG recordings (i.e., raw leads data) and text representations (i.e., sequences of diagnosis codes) of a corresponding diagnosis for each of a plurality of ECGs.
The joint embedding space 410 is where the processing device 140B (executing ECG encoder training module 141) may apply a loss function for training the ECG encoder 147 so that it can learn to match ECG embeddings with corresponding text embeddings.
At block 815, the processing device 140B may prepare a searchable database 605 of ECG embeddings for each ECG in the ECG database 160 (which comprises a plurality of previously recorded ECGs and text representations of their diagnoses) by using the ECG encoder 147 to create ECG embeddings for each ECG in the ECG database 160. As shown in
There may be situations where it is desirable to focus on a specific set of conditions when performing an ECG search. However, because the ECG encoder 147 is trained based on matching text embeddings as discussed above, without reference to any specific classification goal, execution of the ECG search module 143 may produce results that are more generic (and not focused on particular conditions). The method 850 may begin at blocks 855 and 860, which are similar to blocks 805 and 810 described above with respect to
In alternative embodiments, the machine may be connected (e.g., networked) to other machines in a local area network (LAN), an intranet, an extranet, or the Internet. The machine may operate in the capacity of a server or a client machine in a client-server network environment, or as a peer machine in a peer-to-peer (or distributed) network environment. The machine may be a personal computer (PC), a tablet PC, a set-top box (STB), a Personal Digital Assistant (PDA), a cellular telephone, a web appliance, a server, a network router, a switch or bridge, a hub, an access point, a network access control device, or any machine capable of executing a set of instructions (sequential or otherwise) that specify actions to be taken by that machine. Further, while only a single machine is illustrated, the term “machine” shall also be taken to include any collection of machines that individually or jointly execute a set (or multiple sets) of instructions to perform any one or more of the methodologies discussed herein. In one embodiment, computer system 900 may be representative of a server.
The exemplary computer system 900 includes a processing device 902, a main memory 904 (e.g., read-only memory (ROM), flash memory, dynamic random access memory (DRAM)), a static memory 906 (e.g., flash memory, static random access memory (SRAM), etc.), and a data storage device 918, which communicate with each other via a bus 930. Any of the signals provided over various buses described herein may be time multiplexed with other signals and provided over one or more common buses. Additionally, the interconnection between circuit components or blocks may be shown as buses or as single signal lines. Each of the buses may alternatively be one or more single signal lines and each of the single signal lines may alternatively be buses.
Computing device 900 may further include a network interface device 908 which may communicate with a network 920. The computing device 900 also may include a video display unit 910 (e.g., a liquid crystal display (LCD) or a cathode ray tube (CRT)), an alphanumeric input device 912 (e.g., a keyboard), a cursor control device 914 (e.g., a mouse) and an acoustic signal generation device 916 (e.g., a speaker). In one embodiment, video display unit 910, alphanumeric input device 912, and cursor control device 914 may be combined into a single component or device (e.g., an LCD touch screen).
Processing device 902 represents one or more general-purpose processing devices such as a microprocessor, central processing unit, or the like. More particularly, the processing device may be a complex instruction set computing (CISC) microprocessor, a reduced instruction set computing (RISC) microprocessor, a very long instruction word (VLIW) microprocessor, a processor implementing other instruction sets, or processors implementing a combination of instruction sets. Processing device 902 may also be one or more special-purpose processing devices such as an application specific integrated circuit (ASIC), a field programmable gate array (FPGA), a digital signal processor (DSP), a network processor, or the like. The processing device 902 is configured to execute ECG search instructions 925 for performing the operations and steps discussed herein.
The data storage device 918 may include a machine-readable storage medium 928, on which is stored one or more sets of ECG search instructions 925 (e.g., software) embodying any one or more of the methodologies of functions described herein. The ECG search instructions 925 may also reside, completely or at least partially, within the main memory 904 or within the processing device 902 during execution thereof by the computer system 900, the main memory 904 and the processing device 902 also constituting machine-readable storage media. The ECG search instructions 925 may further be transmitted or received over a network 920 via the network interface device 908.
While the machine-readable storage medium 928 is shown in an exemplary embodiment to be a single medium, the term “machine-readable storage medium” should be taken to include a single medium or multiple media (e.g., a centralized or distributed database, or associated caches and servers) that store the one or more sets of instructions. A machine-readable medium includes any mechanism for storing information in a form (e.g., software, processing application) readable by a machine (e.g., a computer). The machine-readable medium may include, but is not limited to, magnetic storage medium (e.g., floppy diskette); optical storage medium (e.g., CD-ROM); magneto-optical storage medium; read-only memory (ROM); random-access memory (RAM); erasable programmable memory (e.g., EPROM and EEPROM); flash memory; or another type of medium suitable for storing electronic instructions.
The preceding description sets forth numerous specific details such as examples of specific systems, components, methods, and so forth, in order to provide a good understanding of several embodiments of the present disclosure. It will be apparent to one skilled in the art, however, that at least some embodiments of the present disclosure may be practiced without these specific details. In other instances, well-known components or methods are not described in detail or are presented in simple block diagram format in order to avoid unnecessarily obscuring the present disclosure. Thus, the specific details set forth are merely exemplary. Particular embodiments may vary from these exemplary details and still be contemplated to be within the scope of the present disclosure.
Additionally, some embodiments may be practiced in distributed computing environments where the machine-readable medium is stored on and/or executed by more than one computer system. In addition, the information transferred between computer systems may be either pulled or pushed across the communication medium connecting the computer systems.
Embodiments of the claimed subject matter include, but are not limited to, various operations described herein. These operations may be performed by hardware components, software, firmware, or a combination thereof.
Although the operations of the methods herein are shown and described in a particular order, the order of the operations of each method may be altered so that certain operations may be performed in an inverse order or so that certain operations may be performed, at least in part, concurrently with other operations. In another embodiment, instructions or sub-operations of distinct operations may be performed in an intermittent or alternating manner.
The above description of illustrated implementations of the invention, including what is described in the Abstract, is not intended to be exhaustive or to limit the invention to the precise forms disclosed. While specific implementations of, and examples for, the invention are described herein for illustrative purposes, various equivalent modifications are possible within the scope of the invention, as those skilled in the relevant art will recognize. The words “example” or “exemplary” are used herein to mean serving as an example, instance, or illustration. Any aspect or design described herein as “example” or “exemplary” is not necessarily to be construed as preferred or advantageous over other aspects or designs. Rather, use of the words “example” or “exemplary” is intended to present concepts in a concrete fashion. As used in this application, the term “or” is intended to mean an inclusive “or” rather than an exclusive “or”. That is, unless specified otherwise, or clear from context, “X includes A or B” is intended to mean any of the natural inclusive permutations. That is, if X includes A; X includes B; or X includes both A and B, then “X includes A or B” is satisfied under any of the foregoing instances. In addition, the articles “a” and “an” as used in this application and the appended claims should generally be construed to mean “one or more” unless specified otherwise or clear from context to be directed to a singular form. Moreover, use of the term “an embodiment” or “one embodiment” or “an implementation” or “one implementation” throughout is not intended to mean the same embodiment or implementation unless described as such. Furthermore, the terms “first,” “second,” “third,” “fourth,” etc. as used herein are meant as labels to distinguish among different elements and may not necessarily have an ordinal meaning according to their numerical designation.
It will be appreciated that variants of the above-disclosed and other features and functions, or alternatives thereof, may be combined into many other different systems or applications. Various presently unforeseen or unanticipated alternatives, modifications, variations, or improvements therein may be subsequently made by those skilled in the art, which are also intended to be encompassed by the following claims. The claims may encompass embodiments in hardware, software, or a combination thereof.