The present disclosure is directed generally to health care. More particularly, but not exclusively, various methods and apparatus disclosed herein relate to training and applying predictive models using discretized physiological sensor data.
The acquisition of continuous-time physiological signals, such as electrocardiogram (“ECG”) signals, photoplethysmogram (“PPG”) signals, and arterial blood pressure (“ABP”) signals, is becoming more widespread as patient monitoring technology evolves to offer cheaper and more mobile sensors. One challenge is extracting meaningful features from these waveforms for various downstream tasks that assist a clinician in better identifying and managing acute patient conditions. One common approach to this problem is to manually extract a handful of well-known and clinically-validated features from the physiological signals and then train a machine learning model on these features. For example, heart-rate variability has been shown to be highly predictive of many acute conditions.
However, limiting a machine learning algorithm to a small set of manually-defined features is unnecessarily restrictive, especially with the advent of more advanced machine learning techniques (e.g., deep learning), which have been shown to be capable of automatically extracting fine-grained patterns and features from complex and high-dimensional datasets. The trouble with these automated techniques, though, is that they often function as a “black box”: they provide very accurate predictive models, but it is often nearly impossible to interpret the extracted features or understand the network's internal logic. This can be problematic for healthcare applications when seeking regulatory approval or soliciting widespread trust of the algorithm within the clinical community.
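As a sketch of this kind of manual feature extraction, two widely used heart-rate variability features, SDNN and RMSSD, can be computed directly from a sequence of RR intervals. The interval values below are hypothetical, and the computation is only illustrative of the conventional approach described above.

```python
import numpy as np

# Hypothetical RR intervals (seconds), e.g., from an ECG R-peak detector.
rr = np.array([0.80, 0.82, 0.79, 0.81, 0.78, 0.83, 0.80])

# SDNN: standard deviation of the RR intervals, in milliseconds.
sdnn = rr.std(ddof=1) * 1000.0

# RMSSD: root mean square of successive RR differences, in milliseconds.
rmssd = np.sqrt(np.mean(np.diff(rr) ** 2)) * 1000.0
```

A model trained on such hand-picked features is easy to interpret, but it can only see what the chosen features encode.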
The present disclosure is directed to methods and apparatus for training and applying predictive models using discretized physiological sensor data. For example, in various embodiments, a temporally-continuous stream of samples may be obtained from a physiological sensor such as an electrocardiogram (“ECG”), a photoplethysmogram (“PPG”), and/or an arterial blood pressure (“ABP”) sensor. The continuous stream of samples may be “preprocessed,” e.g., by being divided into a training sequence of temporal chunks of the same or different temporal lengths. Labels indicative of health conditions may be associated with each temporal chunk of the training sequence. For example, a temporal chunk of a continuous stream of ECG samples that evidences atrial fibrillation (“AF”) may be labeled as such. In some embodiments, these temporal chunks may be further divided into what will be referred to herein as “beats,” which in some embodiments may be represented as feature vectors. The term “beats” as used herein is not limited to heart beats, nor is it necessarily related to heart beats, although that is the case in some embodiments.
Next, predictive models and embeddings may be learned by applying techniques similar to those often applied in the natural language processing (“NLP”) context. For example, each temporal chunk may be treated like a “sentence,” and the individual beats may be treated as individual “words” of the sentence. In some embodiments, the beats may be quantized such that a sequence of beats associated with a given temporal chunk are embedded into an embedding matrix (which in some cases may be a lookup table), e.g., to determine a corresponding vector. Consequently, a training sequence of vectors may be generated for the training sequence of temporal chunks. The training sequence of vectors may then be applied as input across a machine learning model, such as a recurrent neural network, to generate training output. The training output may be analyzed, e.g., by way of comparison with the aforementioned labels, and the machine learning model and/or the embedding matrix (or “embedding layer”) that precedes the machine learning model may be trained based on the comparison.
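The sentence/word analogy can be sketched in miniature: a chunk of per-beat RR intervals is quantized into “word” indices and looked up in an embedding matrix. The bin edges, dimensions, and embedding weights below are hypothetical placeholders, not values from the disclosure.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical vocabulary of q bins and a d-dimensional embedding per bin.
q, d = 16, 4
embedding_matrix = rng.normal(size=(q, d))  # lookup table: one row per bin

# One temporal chunk ("sentence") of per-beat RR intervals ("words"), seconds.
rr_chunk = np.array([0.80, 0.41, 0.79, 0.95, 0.80])

# Quantize each beat into one of the q bins.
bin_edges = np.linspace(0.3, 1.5, q - 1)
token_ids = np.digitize(rr_chunk, bin_edges)

# Embedding lookup turns the chunk into a training sequence of vectors.
chunk_vectors = embedding_matrix[token_ids]
```

Note that identical beats map to identical rows, exactly as repeated words share one word vector in NLP.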
Once the machine learning model and/or embedding layer are trained, they can be used for various purposes. For example, learned weights and/or hyperparameters of the machine learning model and/or embedding layer can be fixed. Then, a new (or “live”) continuous stream of samples may be obtained, e.g., from the same type of physiological sensor, and preprocessed as described above. The preprocessed data, which may include an unlabeled sequence of vectors generated based on the live continuous stream of samples, may be applied as input across the trained machine learning model to generate output. The output may be indicative of a prediction of one or more health conditions. For example, in the ECG context, one kind of prediction that may be made using models/embedding layers trained with techniques described herein is AF.
In some embodiments, the aforementioned embedding layer may be amenable to interpretation, e.g., to lessen the “black box” appearance of the trained machine learning model. For example, the embedding layer may be decomposed, e.g., using eigenvalue analysis. This analysis can be used in some cases to generate a visualization, e.g., for display on a computer display and/or to be printed, of the learned discretized embeddings. For example, the visualization might show that most of the information learned by the embedding layer is contained in particular dimensions, and the rest of the embedding layer may be sparse. In some cases clusters may become evident in the ranges of highest correlation. This information may enable training of a model with fewer dimensions than the original model. As another example specific to the ECG context, Euclidean distances between bins may reveal how the embedding layer distinguishes normal RR intervals from AF RR intervals.
Techniques described herein may give rise to a variety of technical advantages. For example, the machine learning models described herein strike a balance between conventional manual extraction of clinically relevant and well-understood features and automated feature extraction using deep learning. Additionally, in some embodiments, the continuous streams of samples (e.g., waveforms) may be processed into lower dimensional representations (e.g., quantized beats), which require less data and hence are advantageous in mobile applications wherein network traffic and battery life are important. Furthermore, techniques described herein take advantage of the pseudo-periodic nature of many physiological signals (e.g., ECG, PPG, ABP) by, for instance, decomposing the physiological signal into a sequence of quantized beats that evolve over multiple (e.g., cardiac) cycles, and detecting patterns in the quantized beats that can be used to make predictions about clinical conditions. Moreover, techniques described herein leverage NLP techniques to learn from temporal (e.g., inter-beat) patterns in physiological signals.
Generally, in one aspect, a method may include: obtaining a first continuous stream of samples measured by one or more physiological sensors; discretizing the first continuous stream of samples to generate a training sequence of quantized beats; determining a training sequence of vectors corresponding to the training sequence of quantized beats, wherein each vector of the training sequence of vectors is determined based on a respective quantized beat of the training sequence of quantized beats and an embedding matrix; associating a label with each vector of the training sequence of vectors, wherein each label is indicative of a medical condition that is evidenced by samples of the first continuous stream obtained during a time interval associated with the respective vector of the training sequence of vectors; applying the training sequence of vectors as input across a neural network to generate corresponding instances of training output; comparing each instance of training output to the label that is associated with the corresponding vector of the training sequence of vectors; based on the comparing, training the neural network and the embedding matrix; obtaining a second continuous stream of samples from one or more of the physiological sensors; discretizing the second continuous stream of samples to generate a live sequence of quantized beats; determining a live sequence of vectors corresponding to the live sequence of quantized beats, wherein each vector of the live sequence of vectors is determined based on a respective quantized beat and the embedding matrix; applying the live sequence of vectors as input across the neural network to generate corresponding instances of live output; and providing, at one or more output devices operably coupled with one or more processors, information indicative of the live output.
In various embodiments, discretizing the first continuous stream of samples may include: organizing the first continuous stream of samples into a first sequence of temporal chunks of samples; and, for each given temporal chunk of samples of the first sequence of temporal chunks of samples: discretizing the given temporal chunk of samples into a quantized beat of the training sequence of quantized beats; and matching the quantized beat to one of a predetermined number of bins. In various embodiments, each bin of the predetermined number of bins may correspond to a predetermined vector of the embedding matrix.
In various embodiments, the first and second continuous streams of samples may include electrocardiogram data. In various embodiments, each quantized beat of the training and live sequences of quantized beats corresponds to an RR interval. In various embodiments, one or both of the first and second continuous streams of samples may be discretized at one or more of the physiological sensors, and one or both of the training sequence of quantized beats and the live sequence of quantized beats may be provided by one or more of the physiological sensors to the one or more processors. In various embodiments, one or both of the first and second continuous streams of samples may be discretized using one or more additional neural networks.
In various embodiments, the neural network may include a recurrent neural network. In various embodiments, the recurrent neural network may include a long short-term memory. In various embodiments, training the neural network may include applying back propagation with stochastic gradient descent. In various embodiments, training the embedding matrix may include determining weights of the embedding matrix.
In various embodiments, the method may further include: applying eigenvalue analysis to the embedding matrix to generate a visualization of the embedding matrix; and rendering the visualization of the embedding matrix on a display device. Systems and non-transitory computer-readable media are also described herein for performing one or more methods described herein.
It should be appreciated that all combinations of the foregoing concepts and additional concepts discussed in greater detail below (provided such concepts are not mutually inconsistent) are contemplated as being part of the inventive subject matter disclosed herein. In particular, all combinations of claimed subject matter appearing at the end of this disclosure are contemplated as being part of the inventive subject matter disclosed herein. It should also be appreciated that terminology explicitly employed herein that also may appear in any disclosure incorporated by reference should be accorded a meaning most consistent with the particular concepts disclosed herein.
In the drawings, like reference characters generally refer to the same parts throughout the different views. Also, the drawings are not necessarily to scale, emphasis instead generally being placed upon illustrating the principles of the disclosure.
The acquisition of continuous-time physiological signals, such as electrocardiogram (“ECG”) signals, photoplethysmogram (“PPG”) signals, and arterial blood pressure (“ABP”) signals, is becoming more widespread as patient monitoring technology evolves to offer cheaper and more mobile sensors. One challenge is extracting meaningful features from these waveforms for various downstream tasks that assist a clinician in better identifying and managing acute patient conditions. One common approach to this problem is to manually extract a handful of well-known and clinically-validated features from the physiological signals and then train a machine learning model on these features. However, limiting a machine learning algorithm to a small set of manually-defined features is unnecessarily restrictive, especially with the advent of more advanced machine learning techniques (e.g., deep learning), which have been shown to be capable of automatically extracting fine-grained patterns and features from complex and high-dimensional datasets. The trouble with these automated techniques, though, is that they often function as a “black box”: they provide very accurate predictive models, but it is often nearly impossible to interpret the extracted features or understand the network's internal logic. In view of the foregoing, various embodiments and implementations of the present disclosure are directed to training and applying predictive models using discretized physiological sensor data.
Physiological sensors 104 may come in various forms to generate various physiological signals. In some embodiments, physiological sensors 104 may include an ECG sensor that produces a continuous ECG signal. Additionally or alternatively, in some embodiments, physiological sensors 104 may include a PPG sensor that produces a PPG signal. Additionally or alternatively, in some embodiments, physiological sensors 104 may include an ABP sensor that produces an ABP signal. In some embodiments, one or more aspects of the preprocessing described below may be performed by the physiological sensor 104 itself, and the preprocessed data may be provided to logic 102. This may conserve memory and/or network bandwidth, which may be important if logic 102 is part of a resource-constrained device such as a mobile phone.
Logic 102 (and in some cases, physiological sensor(s) 104) may be configured to perform various aspects of the present disclosure.
Now, aspects of data flow 208 will be described.
In various embodiments, logic 102 (and in some cases, physiological sensor(s) 104) may be configured to preprocess the first continuous stream of samples to generate a training sequence of quantized beats. In some embodiments, a first step of preprocessing may be to divide the first continuous sequence of samples into temporal chunks, resulting in a sequence of temporal chunks, x1, x2, . . . , xn. Each temporal chunk x may be represented, for instance, as a feature vector that includes a sequence of samples obtained from the first continuous stream of samples during a particular time interval (e.g., five seconds, ten seconds, thirty seconds, etc.). While each temporal chunk may contain the same number of samples, this is not required.
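One minimal way to carve a sampled stream into the fixed-length temporal chunks x1, x2, . . . , xn is to reshape the sample array; the 125 Hz sampling rate and ten-second chunk length below are assumptions for illustration only.

```python
import numpy as np

fs = 125                # assumed sampling rate in Hz
chunk_seconds = 10      # assumed chunk length

# Stand-in for one minute of a continuous stream of sensor samples.
stream = np.random.default_rng(1).normal(size=fs * 60)

chunk_len = fs * chunk_seconds
n_chunks = len(stream) // chunk_len

# x1, x2, ..., xn: each row is one temporal chunk of samples.
chunks = stream[: n_chunks * chunk_len].reshape(n_chunks, chunk_len)
```

Variable-length chunks, which the disclosure also permits, would instead be stored as a list of arrays rather than a single matrix.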
In some embodiments, a next step of preprocessing may include transforming each temporal chunk of samples into a sequence of what will be referred to herein as “beats.” In various embodiments, each “beat” may be represented by a set of beat-level features (e.g., a feature vector). Mathematically, this new representation can be expressed as a matrix Xi ∈ ℝ^(b×p), where b is the number of beats in the temporal chunk and p is the number of beat-level features per beat.
In some embodiments, a last step of preprocessing may be to quantize the beat-level features into a set of q bins, i.e., to embed the beat-level features into embedding layer 210.
In some embodiments, and in some cases after the continuous stream of samples is preprocessed, logic 102 may associate a label with each quantized beat of the training sequence of quantized beats and/or to each vector of training sequence of vectors (described below). As noted above, each quantized beat of the training sequence is generated from samples of the continuous stream that were obtained during a corresponding time interval (e.g., five seconds, ten seconds, thirty seconds, sixty seconds, etc.)—that is, from a temporal chunk of samples. Each temporal chunk of samples may have a label assigned to it, e.g., by a clinician, that indicates a health condition evidenced by the temporal chunk of samples. For example, a cardiologist may manually label temporal chunks of an ECG signal as, for instance, “normal” or “AF.” These same labels may be associated with the quantized beats that were generated from the corresponding temporal chunks of samples.
Referring primarily to
For each embedding vector, logic 102 may compare the output 216 to the label associated with the given temporal chunk of samples from which the embedding vector was generated. Based on the comparing, logic 102 may train neural network 212. For example, in some embodiments, logic 102 may employ well-known techniques such as back propagation with stochastic gradient descent to alter weights of the hidden layers 2141-N and, in some cases, weights of the embedding layer 210 as well. Thus, in addition to training both embedding layer 210 and neural network 212 to make predictions about health conditions, embedding layer 210, once trained, may be analyzed on its own (e.g., by being visualized as described below) to identify various information about correlations, etc.
Once neural network 212 and embedding layer 210 are trained, they may be used to predict health conditions based on subsequent unlabeled continuous streams of samples received from physiological sensor 104. For example, a subsequent continuous stream of samples may be acquired at/obtained from physiological sensor 104. The subsequent continuous stream of samples may be preprocessed/embedded into embedding layer 210 in a manner similar to that described above to generate what will be referred to herein as a “live” sequence of embedding vectors.
Logic 102 may then apply the unlabeled embedding vectors as input across the neural network 212 to generate “live” output 216. This “live” output may be indicative of a prediction of a health condition, such as AF. In some embodiments, logic 102 may provide, e.g., at one or more output devices operably coupled with logic 102 (e.g., a display device, a speaker, a printout, etc.), information indicative of the health condition prediction. For example, if AF is detected, logic 102 may cause an alarm to be raised, which in some cases may cause one or more communications (e.g., emails, text messages, pages, etc.) to be transmitted to medical personnel. Additionally or alternatively, a log of a patient's health condition(s) over time may be generated for later analysis by a clinician.
In the training stage of embedding layer 210 and neural network 212, the model parameters may be tuned based on a set of labeled data. Let (X̂1, y1), . . . , (X̂n, yn) denote a training dataset in which X̂i is a discretized beat-level feature matrix as described in the preprocessing section above, and yi is the corresponding label assigned to that sequence. The model used to predict yi from X̂i may in some embodiments be a deep learning recurrent neural network, e.g., inspired by the use of word embeddings in the natural language processing (“NLP”) domain. Just as a sentence may be decomposed into a sequence of words drawn from a finite vocabulary, each example X̂i can be thought of as a “beat sentence,” with each beat likewise drawn from a finite vocabulary due to the discretization process described in the preprocessing section above. The goal may be to learn patterns in the “beat sentence” structure that are predictive of the corresponding label.
The purpose of embedding layer 210 is to take a discrete-valued input and map it into a higher-dimensional continuous space. Mathematically, the embedding is defined by a function ƒ: D^p → ℝ^d, where D is the finite set of bin values. In various embodiments, the input may be a discretized beat represented by a p-dimensional discrete vector, and the output may be a continuous-valued d-dimensional vector which will be fed as input to the subsequent recurrent neural network (e.g., neural network 212).
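One way such a function ƒ might be realized is with a set of lookup tables, one per discrete beat feature, whose selected rows are summed. The per-feature tables, the summation, and all dimensions below are assumptions for illustration; the disclosure does not mandate this particular construction.

```python
import numpy as np

rng = np.random.default_rng(2)

# Assumed dimensions: p discrete features per beat, q bins per feature,
# d-dimensional continuous output.
p, q, d = 3, 16, 8
tables = rng.normal(size=(p, q, d))   # one lookup table per discrete feature

def embed(beat):
    """f: map a p-dimensional discrete beat to a continuous d-dim vector."""
    return sum(tables[j, beat[j]] for j in range(p))

vec = embed(np.array([4, 0, 12]))     # bin indices, one per feature
```

Because the mapping is a table lookup, identical discrete beats always produce identical continuous vectors, which is what makes the later interpretation of the learned table meaningful.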
In various embodiments, the output of embedding layer 210 may be applied as input across a neural network 212 (e.g., a deep recurrent neural network, a long short-term memory (“LSTM”) network, or a convolutional neural network), to learn sequential patterns in the “beat sentences” that are predictive of the corresponding label y. The specifics of the network architecture will vary depending on the application, but in some embodiments, the network may contain one or more stacked recurrent neural network layers (e.g., hidden layers 2141-N).
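The forward pass of such a network can be sketched in miniature. A plain tanh recurrence stands in here for the stacked LSTM layers, and all weights are random placeholders rather than trained parameters.

```python
import numpy as np

rng = np.random.default_rng(3)
d, h = 8, 16                       # embedding dim, hidden size (assumed)

# Placeholder parameters: one recurrent layer plus a sigmoid read-out.
W_xh = rng.normal(scale=0.1, size=(d, h))
W_hh = rng.normal(scale=0.1, size=(h, h))
w_out = rng.normal(scale=0.1, size=h)

def predict(chunk_vectors):
    """Run a 'beat sentence' (sequence of d-dim vectors) through the RNN."""
    state = np.zeros(h)
    for v in chunk_vectors:                      # one recurrent step per beat
        state = np.tanh(v @ W_xh + state @ W_hh)
    logit = state @ w_out                        # final hidden state -> score
    return 1.0 / (1.0 + np.exp(-logit))          # predicted probability of y

y_hat = predict(rng.normal(size=(10, d)))        # ten beats -> one prediction
```

An LSTM replaces the single tanh update with gated cell-state updates, which helps the network retain inter-beat patterns over longer sequences.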
In various embodiments, parameters of neural network 212 may be trained in conjunction with the parameters from embedding layer 210 to minimize an objective function that quantifies the difference between the predicted output ŷ and the true label y, e.g., such as binary cross-entropy loss. A variety of different optimization algorithms may be applied to minimize this loss, including but not limited to back-propagation with stochastic gradient descent.
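The objective and update can be illustrated with a toy stand-in: a single logistic unit trained by plain gradient descent on binary cross-entropy. The data, learning rate, and iteration count are arbitrary, and the single unit merely stands in for the joint parameters of embedding layer 210 and neural network 212.

```python
import numpy as np

def bce(y_true, y_pred, eps=1e-9):
    """Binary cross-entropy between labels and predicted probabilities."""
    y_pred = np.clip(y_pred, eps, 1 - eps)
    return -np.mean(y_true * np.log(y_pred) + (1 - y_true) * np.log(1 - y_pred))

rng = np.random.default_rng(4)
X = rng.normal(size=(64, 5))                     # toy inputs
w_true = np.array([1.0, -2.0, 0.5, 0.0, 1.5])    # hidden "true" weights
y = (X @ w_true > 0).astype(float)               # separable binary labels

w = np.zeros(5)
initial_loss = bce(y, 1 / (1 + np.exp(-(X @ w))))
for _ in range(500):                             # plain gradient descent
    p = 1 / (1 + np.exp(-(X @ w)))
    w -= 0.1 * (X.T @ (p - y)) / len(y)          # gradient of BCE w.r.t. w

final_loss = bce(y, 1 / (1 + np.exp(-(X @ w))))
```

In practice the same gradient flows backward through the recurrent layers into the embedding table, which is how the “word vectors” for the bins are learned rather than hand-designed.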
After training, in the execution phase, the parameters of embedding layer 210 and neural network 212 that were optimized during the training phase are held fixed. Continuous streams of samples from physiological sensor 104 may be preprocessed as described above and applied as input across embedding layer 210 and neural network 212. The output 216 may include real-time predictions of health conditions. Output 216 may be used to notify clinicians (if necessary) in various ways, such as by raising one or more audio and/or visual alarms, transmitting communications to appropriate computing devices (e.g., by way of text message, alerts, emails, etc.), and/or generating reports, e.g., that document a patient's condition over time.
As noted above, in various embodiments, the trained embedding layer 210 may be interpreted, e.g., using eigenvalue analysis, to make various determinations. In some embodiments, trained embedding layer 210 may be interpreted to make the overall model less of a “black box.” In some embodiments, after training, embedding layer 210, or θ, may be decomposed through eigenvalue analysis as follows:
θ = UΛU^T
where Λ = diag(λ1, . . . , λn) is a diagonal matrix of the eigenvalues λ1, . . . , λn, and U = [e1, . . . , en] is a matrix of the corresponding eigenvectors.
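Such an analysis might be run as follows. Since the decomposition θ = UΛU^T requires a symmetric matrix and the embedding table is generally rectangular (q bins by d dimensions), the sketch below analyzes the covariance of the embedding rows; that choice, like the random weights, is an assumption rather than the disclosure's prescription.

```python
import numpy as np

rng = np.random.default_rng(5)
theta = rng.normal(size=(16, 8))        # embedding weights: q bins x d dims

# Symmetrize via the covariance of the embedding rows (one possible choice).
cov = np.cov(theta, rowvar=False)
eigvals, U = np.linalg.eigh(cov)        # Lambda = diag(eigvals), U = eigenvectors

# Rank dimensions by how much embedding variance each one carries.
order = np.argsort(eigvals)[::-1]       # largest eigenvalue first
explained = eigvals[order] / eigvals.sum()
```

If a few entries of `explained` dominate, most of the learned information sits in a handful of dimensions, supporting the retraining-with-fewer-dimensions observation made above.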
At the top of
In some embodiments, these discretized values representing RR intervals may be used to quantize the waveforms, i.e., to embed representative data into embedding layer 210, which is also referred to as a “word embedding,” θ1.
When a particular RR interval value is mapped to a particular bin, the embedding vector (or row) of embedding layer 210 that corresponds to the mapped bin may be added to a sequence of embedding vectors. Thus, if ten discretized beats are generated, and each beat includes, as a feature, an RR interval, then ten rows will be generated. These rows are applied as input across neural network 212 (which in this example is a recurrent neural network followed by an LSTM layer).
At block 602, the system may obtain a first continuous stream of samples from one or more physiological sensors (e.g., 104). As noted above, various types of physiological sensors 104 may provide such data, such as ECG, PPG, ABP, etc. This first continuous stream of samples may be used for training, and thus need not be real time data. More typically, it would be physiological sensor data that is studied by a clinician beforehand. The clinician may label various temporal chunks of the signal with various labels, such as “normal,” “AF,” or with other labels indicative of other health conditions.
At block 604, the system may discretize the first continuous stream of samples to generate a training sequence of quantized beats. As noted above, the preprocessing may include dividing the samples into the aforementioned temporal chunks. At block 606, the system may determine a training sequence of (embedding) vectors that correspond to the training sequence of quantized beats. For example, each quantized beat may be matched to a bin as described above, and then the bin may be used to select a vector (or row) from an embedding matrix (e.g., embedding layer 210). At block 608, the system may associate a label with each vector of the training sequence of vectors. Each label may be indicative of a medical condition that is evidenced by samples of the first continuous stream obtained during the time interval associated with the temporal chunk from which the vector was determined.
At block 610, the system may apply the training sequence of vectors as input across a neural network to generate corresponding instances of training output. At block 612, the system may compare each instance of training output to the label that is associated with the corresponding vector of the training sequence of vectors (i.e., the vector that was applied as input to generate the instance of output). At block 614, based on the comparing, the system may train the neural network, e.g., using back propagation and stochastic gradient descent. In some embodiments, both the neural network and the embedding layer 210 may be trained.
At block 616, the system may obtain a second (e.g., unlabeled) continuous stream of samples from one or more of the physiological sensors. In many cases, the second continuous stream of samples may be a “live” or “real time” stream of samples, though this is not required. At block 618, the system may discretize the second continuous stream of samples to generate a live sequence of quantized beats, as was described above with respect to block 604. At block 620, the system may determine a live sequence of unlabeled vectors corresponding to the live sequence of quantized beats, similar to block 606 of method 600A.
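The live path can be sketched end to end with frozen placeholder parameters. The bin edges, embedding weights, and the mean-pooled logistic read-out below are all hypothetical stand-ins: in the disclosure the trained recurrent network, not a pooling step, would consume the vectors.

```python
import numpy as np

rng = np.random.default_rng(6)

# Frozen parameters notionally carried over from training (placeholders here).
bin_edges = np.linspace(0.3, 1.5, 15)
embedding = rng.normal(size=(16, 4))
w_out = rng.normal(size=4)

def predict_chunk(rr_chunk):
    """Discretize (block 618), vectorize (block 620), and score one chunk."""
    token_ids = np.digitize(rr_chunk, bin_edges)   # live quantized beats
    vectors = embedding[token_ids]                 # live unlabeled vectors
    pooled = vectors.mean(axis=0)                  # stand-in for the trained RNN
    return 1 / (1 + np.exp(-pooled @ w_out))       # live output, e.g., P(AF)

live_rr = np.array([0.42, 0.98, 0.37, 1.10, 0.55])  # irregular "live" beats
score = predict_chunk(live_rr)
```

Because no parameters change at inference time, the per-chunk cost is one lookup per beat plus one network pass, which suits the resource-constrained mobile deployments mentioned above.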
In some embodiments, the operations of method 600B may be omitted. Instead, the embedding matrix (embedding layer 210) trained in method 600A may be used as described above to generate visualizations of the learned embeddings.
User interface input devices 722 may include a keyboard, pointing devices such as a mouse, trackball, touchpad, or graphics tablet, a scanner, a touchscreen incorporated into the display, audio input devices such as voice recognition systems, microphones, and/or other types of input devices. In general, use of the term “input device” is intended to include all possible types of devices and ways to input information into computing device 710 or onto a communication network.
User interface output devices 720 may include a display subsystem, a printer, a fax machine, or non-visual displays such as audio output devices. The display subsystem may include a cathode ray tube (CRT), a flat-panel device such as a liquid crystal display (LCD), a projection device, or some other mechanism for creating a visible image. The display subsystem may also provide non-visual display such as via audio output devices. In general, use of the term “output device” is intended to include all possible types of devices and ways to output information from computing device 710 to the user or to another machine or computing device.
Storage subsystem 724 stores programming and data constructs that provide the functionality of some or all of the modules described herein. For example, the storage subsystem 724 may include the logic to perform selected aspects of the methods described herein.
These software modules are generally executed by processor 714 alone or in combination with other processors. Memory 725 used in the storage subsystem 724 can include a number of memories including a main random access memory (RAM) 730 for storage of instructions and data during program execution and a read only memory (ROM) 732 in which fixed instructions are stored. A file storage subsystem 726 can provide persistent storage for program and data files, and may include a hard disk drive, a floppy disk drive along with associated removable media, a CD-ROM drive, an optical drive, or removable media cartridges. The modules implementing the functionality of certain implementations may be stored by file storage subsystem 726 in the storage subsystem 724, or in other machines accessible by the processor(s) 714.
Bus subsystem 712 provides a mechanism for letting the various components and subsystems of computing device 710 communicate with each other as intended. Although bus subsystem 712 is shown schematically as a single bus, alternative implementations of the bus subsystem may use multiple busses.
Computing device 710 can be of varying types including a workstation, server, computing cluster, blade server, server farm, or any other data processing system or computing device. Due to the ever-changing nature of computers and networks, the description of computing device 710 is intended only as a specific example for purposes of illustrating some implementations.
While several inventive embodiments have been described and illustrated herein, those of ordinary skill in the art will readily envision a variety of other means and/or structures for performing the function and/or obtaining the results and/or one or more of the advantages described herein, and each of such variations and/or modifications is deemed to be within the scope of the inventive embodiments described herein. More generally, those skilled in the art will readily appreciate that all parameters, dimensions, materials, and configurations described herein are meant to be exemplary and that the actual parameters, dimensions, materials, and/or configurations will depend upon the specific application or applications for which the inventive teachings is/are used. Those skilled in the art will recognize, or be able to ascertain using no more than routine experimentation, many equivalents to the specific inventive embodiments described herein. It is, therefore, to be understood that the foregoing embodiments are presented by way of example only and that, within the scope of the appended claims and equivalents thereto, inventive embodiments may be practiced otherwise than as specifically described and claimed. Inventive embodiments of the present disclosure are directed to each individual feature, system, article, material, kit, and/or method described herein. In addition, any combination of two or more such features, systems, articles, materials, kits, and/or methods, if such features, systems, articles, materials, kits, and/or methods are not mutually inconsistent, is included within the inventive scope of the present disclosure.
All definitions, as defined and used herein, should be understood to control over dictionary definitions, definitions in documents incorporated by reference, and/or ordinary meanings of the defined terms.
The indefinite articles “a” and “an,” as used herein in the specification and in the claims, unless clearly indicated to the contrary, should be understood to mean “at least one.”
The phrase “and/or,” as used herein in the specification and in the claims, should be understood to mean “either or both” of the elements so conjoined, i.e., elements that are conjunctively present in some cases and disjunctively present in other cases. Multiple elements listed with “and/or” should be construed in the same fashion, i.e., “one or more” of the elements so conjoined. Other elements may optionally be present other than the elements specifically identified by the “and/or” clause, whether related or unrelated to those elements specifically identified. Thus, as a non-limiting example, a reference to “A and/or B”, when used in conjunction with open-ended language such as “comprising” can refer, in one embodiment, to A only (optionally including elements other than B); in another embodiment, to B only (optionally including elements other than A); in yet another embodiment, to both A and B (optionally including other elements); etc.
As used herein in the specification and in the claims, “or” should be understood to have the same meaning as “and/or” as defined above. For example, when separating items in a list, “or” or “and/or” shall be interpreted as being inclusive, i.e., the inclusion of at least one, but also including more than one, of a number or list of elements, and, optionally, additional unlisted items. Only terms clearly indicated to the contrary, such as “only one of” or “exactly one of,” or, when used in the claims, “consisting of,” will refer to the inclusion of exactly one element of a number or list of elements. In general, the term “or” as used herein shall only be interpreted as indicating exclusive alternatives (i.e. “one or the other but not both”) when preceded by terms of exclusivity, such as “either,” “one of,” “only one of,” or “exactly one of.” “Consisting essentially of,” when used in the claims, shall have its ordinary meaning as used in the field of patent law.
As used herein in the specification and in the claims, the phrase “at least one,” in reference to a list of one or more elements, should be understood to mean at least one element selected from any one or more of the elements in the list of elements, but not necessarily including at least one of each and every element specifically listed within the list of elements and not excluding any combinations of elements in the list of elements. This definition also allows that elements may optionally be present other than the elements specifically identified within the list of elements to which the phrase “at least one” refers, whether related or unrelated to those elements specifically identified. Thus, as a non-limiting example, “at least one of A and B” (or, equivalently, “at least one of A or B,” or, equivalently “at least one of A and/or B”) can refer, in one embodiment, to at least one, optionally including more than one, A, with no B present (and optionally including elements other than B); in another embodiment, to at least one, optionally including more than one, B, with no A present (and optionally including elements other than A); in yet another embodiment, to at least one, optionally including more than one, A, and at least one, optionally including more than one, B (and optionally including other elements); etc.
It should also be understood that, unless clearly indicated to the contrary, in any methods claimed herein that include more than one step or act, the order of the steps or acts of the method is not necessarily limited to the order in which the steps or acts of the method are recited.
In the claims, as well as in the specification above, all transitional phrases such as “comprising,” “including,” “carrying,” “having,” “containing,” “involving,” “holding,” “composed of,” and the like are to be understood to be open-ended, i.e., to mean including but not limited to. Only the transitional phrases “consisting of” and “consisting essentially of” shall be closed or semi-closed transitional phrases, respectively, as set forth in the United States Patent Office Manual of Patent Examining Procedures, Section 2111.03. It should be understood that certain expressions and reference signs used in the claims pursuant to Rule 6.2(b) of the Patent Cooperation Treaty (“PCT”) do not limit the scope.
The present application claims the benefit of U.S. Provisional Application Ser. No. 62/583,128, filed Nov. 8, 2017, which is hereby incorporated by reference in its entirety.