The present disclosure pertains to deep neural networks for classifying recorded waveforms representing a physiological characteristic of a human body and, more particularly, to a technique for training such a deep neural network.
This section of this document introduces information about and/or from the art that may provide context for or be related to the subject matter described herein and/or claimed below. It provides background information to facilitate a better understanding of the various aspects of that which is claimed below. This is a discussion of “related” art. That such art is related in no way implies that it is also “prior” art. The related art may or may not be prior art. The discussion in this section of this document is to be read in this light and not as an admission of prior art.
The physiological state of a patient in a clinical setting is frequently represented by monitoring one or more physiological characteristics of the patient over time. The monitoring usually includes capturing data using sensors, the captured data representing the physiological characteristic. Because the data is representative of a physiological characteristic, the data may be referred to as “physiological data”.
In one common example known as electrocardiography, a number of electrodes are placed at predetermined points on the patient's body. Each electrode produces a voltage signal that is captured and graphed. The graph, or electrocardiogram (“ECG”, or “EKG”), provides a visual representation of the magnitude of the voltage signal over time. A trained clinician or technician can then determine how “normally” or “non-normally” the patient's heart is beating. Thus, in this context, the captured physiological data is representative of the patient's heartbeat.
The data captured by such monitoring is typically ordered by at least the time of acquisition and an attribute of the physiological characteristic as represented in the data. In the ECG example above, the data is ordered not only by time of acquisition and the magnitude of the voltage, but also by the sensor through which the data was acquired. The voltage acquired by an individual sensor can be rendered for human perception as a graph, or plot, of the voltage magnitude over time, referred to as a “waveform”. It is also common to collectively refer to the physiological data as a “waveform”. The result of the ECG, then, is a set of waveforms reflecting the patient's heartbeat.
It was mentioned above relative to the ECG that a trained clinician or technician may analyze “waveforms” to determine certain aspects of the patient's condition captured in the physiological data. For a variety of reasons, the art has turned to “intelligent machines”, or computing machines employing various kinds of artificial intelligence, to perform this analysis or evaluation. However, like the clinician or technician, the intelligent machine must be trained to perform this analysis.
In some embodiments, a method comprises training an autoencoder with a set of unlabeled input samples through unsupervised learning and training a deep neural network through supervised learning using the trained autoencoder. The unlabeled input samples may be recorded waveforms representing a physiological characteristic of a human body. Training the deep neural network through supervised learning using the trained autoencoder includes: training the deep neural network with a first subset of manually labeled samples selected from the set of unlabeled samples; and iteratively training the deep neural network with a plurality of successive subsets of manually labeled samples drawn from the unlabeled samples until convergence or until the unlabeled input samples are exhausted. Each successive subset comprises a plurality of selected, distanced, unlabeled samples with the least confidence from the remaining unlabeled samples, to which labels are propagated, the distance determination including using the autoencoder for feature extraction.
In other embodiments, a method comprises: training an autoencoder with a plurality of unlabeled input samples through unsupervised learning and training a deep neural network through supervised learning using the trained autoencoder. The unlabeled input samples are recorded waveforms representing a physiological characteristic of a human body. Training the deep neural network through supervised learning using the trained autoencoder includes: manually labeling a first predetermined number of randomly selected unlabeled samples from the plurality of unlabeled input samples to generate a first subset of labeled samples; training the deep neural network with the first subset of labeled samples; manually labeling a second predetermined number of selected, distanced, unlabeled samples to generate a second subset of labeled samples; propagating labels to a second predetermined number of the remaining unlabeled samples that are closest to the labeled samples; and iterating until either convergence or the remaining unlabeled samples are exhausted. Manually labeling the second predetermined number of selected, distanced, unlabeled samples to generate the second subset of labeled samples may include: selecting a plurality of unlabeled samples with the least confidence from the remaining unlabeled samples; and filtering the selected plurality of unlabeled samples to discard selected unlabeled samples that are too close to another selected unlabeled sample, using the trained autoencoder for feature extraction of the compared selected unlabeled samples.
In still other embodiments, there is a method for use in annotating a plurality of recorded waveforms representing a physiological characteristic of a human body. The method comprises: providing a set of unlabeled samples of the recorded waveforms; training a deep neural network with the unlabeled samples to develop an autoencoder; receiving a plurality of manual labels for a first predetermined number of randomly selected, unlabeled samples; augmenting the manually labeled, randomly selected samples; training a deep neural network with the augmented, manually labeled, randomly selected samples; applying the trained deep neural network to the remaining unlabeled samples; receiving a second predetermined number of selections of the remaining unlabeled samples; propagating labels to a third predetermined number of the remaining unlabeled samples that were closest to the labeled samples; and iterating until either convergence or the remaining unlabeled samples are exhausted. In these embodiments, receiving the second predetermined number of selections of the remaining unlabeled samples includes: identifying the second predetermined number of candidate unlabeled samples having the least confidence; and filtering the identified candidate unlabeled samples. The filtering then includes: using the trained autoencoder for feature extraction to determine whether each identified candidate is too close to an immediately prior identified candidate; if an identified candidate is too close to the immediately prior identified candidate, discarding the identified candidate; identifying a replacement candidate for the discarded candidate; and iterating the identifying and filtering until the second predetermined number of unlabeled samples has been identified and filtered.
In still other embodiments, a computing apparatus comprises: a processor-based resource; and a memory electronically communicating with the processor-based resource. The memory may be encoded with instructions that, when executed by the processor-based resource, perform the methods set forth herein.
Still other embodiments include a non-transitory, computer-readable memory encoded with instructions that, when executed by a processor-based resource, perform the methods set forth herein.
Illustrative embodiments of the subject matter claimed below will now be disclosed. In the interest of clarity, not all features of an actual implementation are described in this specification. It will be appreciated that in the development of any such actual embodiment, numerous implementation-specific decisions must be made to achieve the developers' specific goals, such as compliance with system-related and business-related constraints, which will vary from one implementation to another. Moreover, it will be appreciated that such a development effort, even if complex and time-consuming, would be a routine undertaking for those of ordinary skill in the art having the benefit of this disclosure.
While the invention is susceptible to various modifications and alternative forms, the drawings illustrate specific examples herein described in detail by way of example. It should be understood, however, that the description herein of specific examples is not intended to limit the invention to the particular forms disclosed, but on the contrary, the intention is to cover all modifications, equivalents, and alternatives falling within the spirit and scope of the invention as defined by the appended claims.
Machine learning and deep learning are powerful mathematical tools for solving classification and regression problems. Supervised learning and unsupervised learning are the two major methods.
Supervised learning makes predictions based on a set of labeled training examples that users provide. This technique is useful when one knows what the outcome should look like. Supervised learning is usually used to predict a future outcome (a regression problem) or to classify the input data (a classification problem). In supervised learning, one generates annotations/labels for training samples, trains the model with these training samples, and then tests and deploys the model to the target system.
In unsupervised learning, the data points are not labeled—the algorithm labels them itself by organizing the data or describing its structure. This technique is useful when it is not known what the outcome should look like, and one is trying to find hidden structure inside data sets. For example, one might provide customer data, and want to create segments of customers who like similar products. The data that is provided isn't labeled, and the labels in the outcome are generated based on the similarities that were discovered between input data.
One drawback to supervised learning is that labeling, or annotating, the input samples may be arduous and time-consuming. Labeling a group of input samples generally involves a knowledgeable and trained person examining each input sample, determining a “correct” outcome for each input sample, and then annotating each input sample with that respective correct outcome. The number of input samples may also be large. These kinds of factors, when present, cumulatively ensure that annotating the input samples is a long and difficult task.
This disclosure presents a technique including a method and an apparatus for supervised training of a neural network that greatly reduces the time, cost, and effort for annotating the training set and training a deep neural network. This technique can be referred to as a “smart annotation” technique because it engenders these savings in resources. As used herein, a smart annotation technique is one that uses, for example, an autoencoder trained via unsupervised learning and applies both supervised and unsupervised learning to train a neural network. The supervised learning includes least confidence assessment combined with label propagation and an autoencoder to train the neural network. The technique includes not only such a method, but also a computing apparatus programmed to perform such a method by executing instructions stored on a computer-readable, non-transitory storage medium using a processor-based resource.
The technique as disclosed herein presumes that a set of input samples has previously been acquired. The input samples in the disclosed embodiments are ECG waveforms. However, it is to be understood that this is for illustration only, and that embodiments may also operate on input samples of waveforms acquired using other processes. For example, it is common in patient care to monitor a number of physiological characteristics of a patient that can be acquired using other procedures and represented as a waveform. The technique described herein may be used in conjunction with such other waveforms acquired by other procedures and representative of other physiological characteristics. Examples of such other physiological characteristics include, but are not limited to, invasive pressures (e.g., invasive blood pressure), gas output from respiration (e.g., oxygen, carbon dioxide), blood oxygenation, internal pressures (e.g., intracranial pressures), measurement of concentration of anesthesia agents (e.g., nitrous oxide), etc.
Furthermore, the input samples disclosed herein are “organic” in the sense that they have been acquired by actually performing ECG procedures on patient(s). The input samples in other embodiments may be synthetic in the sense that they have been acquired by artificially generating them. Still other embodiments may use a combination of organic and synthetic waveforms.
ECG waveforms are acquired over time and represent a number of heartbeats occurring within a predetermined window of time. These waveforms may be sampled as portions of the larger ECG waveform, each portion representing a single heartbeat. The ECG waveforms in the input samples of the illustrated embodiment may therefore be referred to as “beats” because they are portions of a larger waveform representing a single heartbeat. Thus, the input samples for the disclosed technique may be obtained by sampling portions of larger waveforms.
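By way of illustration only, the following sketch shows one way such single-beat input samples might be cut from a longer recorded waveform, assuming beats are taken as fixed-length windows centered on detected R-peaks. The sampling rate, window length, and peak-detection settings are assumptions for illustration and are not prescribed by this disclosure.

```python
# Illustrative sketch only: slicing single-beat input samples out of a
# longer ECG record. All settings below are assumptions, not values
# prescribed by this disclosure.
import numpy as np
from scipy.signal import find_peaks

def segment_beats(ecg, fs=360, half_window_s=0.35):
    """Cut a long ECG waveform into fixed-length, single-beat samples
    centered on detected R-peaks."""
    half = int(half_window_s * fs)  # samples kept on each side of the peak
    # Crude R-peak detection; a clinical-grade detector would be used in
    # practice. The prominence value assumes a normalized signal.
    peaks, _ = find_peaks(ecg, distance=int(0.4 * fs), prominence=0.5)
    beats = [ecg[p - half:p + half] for p in peaks
             if p - half >= 0 and p + half <= len(ecg)]
    return np.stack(beats) if beats else np.empty((0, 2 * half))
```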
This particular embodiment will classify input samples (or, “beats”) as either “normal” or “non-normal”—that is, whether the heartbeat they represent is normal or non-normal. For example, in the illustrated embodiment where the input samples are beats, a beat showing an atrial fibrillation (or, “afib”) would be classified as non-normal. Using the terminology above, each outcome of the evaluation process will be either “normal” or “non-normal”. The terms “normal” and “non-normal” are clinically defined and those definitions are well known to those skilled in the art of evaluating ECG waveforms.
For example, for ECG beat classification, each training sample includes one beat as shown in
In the illustrated embodiment, ECG beats are classified into two classes, as mentioned above. One class is the normal beat; the other is the non-normal beat, which includes: left bundle branch block, right bundle branch block, bundle branch block, atrial premature beat, premature ventricular contraction, supraventricular premature or ectopic beat (atrial or nodal), and ventricular escape beat. Some examples are shown in
The disclosed technique uses unsupervised learning to build an autoencoder by training with unlabeled data. An autoencoder is a type of neural network that can be used to learn a compressed representation of raw data. An autoencoder is composed of an encoder and a decoder sub-model. The encoder compresses the input and the decoder attempts to recreate the input from the compressed version provided by the encoder. After training, the encoder model is saved, and the decoder is discarded. One suitable autoencoder and its use for feature extraction are described further below relative to
N samples are then randomly picked and labeled by hand. In one embodiment, N=56, for example. A deep neural network is then trained with the labeled samples using data augmentation (noise with small variance). The data augmentation reduces overfitting during training. The present technique then runs the trained deep neural network on the remaining unlabeled samples and uses least confidence to pick N samples that are then labeled by hand. This is a general method used in active learning, but the presently disclosed technique modifies the conventional approaches. In particular, if two new samples are close enough (using the trained autoencoder to extract features, with the feature distance between samples representing the sample distance), the present technique will ignore the second one and keep looking for the next least-confidence sample.
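By way of illustration, the least-confidence ranking referred to above might be sketched as follows, assuming a PyTorch classifier whose forward pass returns class logits. The function and parameter names are illustrative, not part of the disclosed implementation; the distance-based filtering of the ranked candidates is sketched later in this disclosure.

```python
# Illustrative sketch: rank unlabeled samples by least confidence, where
# confidence is taken to be the largest softmax probability.
import torch

def rank_by_least_confidence(model, unlabeled, batch_size=256):
    """Return indices of unlabeled samples ordered from least confident
    to most confident."""
    model.eval()
    confidences = []
    with torch.no_grad():
        for i in range(0, len(unlabeled), batch_size):
            logits = model(unlabeled[i:i + batch_size])
            confidences.append(torch.softmax(logits, dim=1).max(dim=1).values)
    return torch.argsort(torch.cat(confidences))  # least confident first
```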
Next, label propagation is applied to N samples. Within the unlabeled samples, the N samples closest to the labeled samples are identified and added for training, but each labeled sample propagates its label to only one sample. So, in each iteration, there are 2N new training samples: N samples from user annotation and N samples from automatic label propagation. The present approach then returns to training the deep neural network with the labeled samples and iterates until the accuracy changes by less than 0.25% over five iterations.
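A minimal sketch of this one-to-one label propagation follows, assuming the trained encoder produces a feature map per sample that can be flattened to a feature vector and that Euclidean feature distance represents sample distance. All names are illustrative assumptions.

```python
# Illustrative sketch: each newly labeled sample donates its label to its
# single nearest still-unclaimed unlabeled neighbor in feature space.
import torch

def propagate_labels(encoder, labeled_x, labeled_y, unlabeled_x):
    with torch.no_grad():
        f_lab = encoder(labeled_x).flatten(1)    # (N, D) feature vectors
        f_unl = encoder(unlabeled_x).flatten(1)  # (M, D) feature vectors
    dists = torch.cdist(f_lab, f_unl)            # (N, M) pairwise distances
    taken, propagated = set(), []                # (unlabeled index, label) pairs
    for i in range(len(labeled_x)):              # one propagation per labeled sample
        for j in torch.argsort(dists[i]).tolist():
            if j not in taken:                   # nearest unclaimed neighbor
                taken.add(j)
                propagated.append((j, labeled_y[i].item()))
                break
    return propagated
```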
The presently disclosed technique may also be illustrated graphically as shown in
More particularly,
Note, however, that other embodiments may be used to analyze and/or classify input samples representing some other physiological characteristic. Examples of such other physiological characteristics include, but are not limited to, invasive pressures (e.g., invasive blood pressure), gas output from respiration (e.g., oxygen, carbon dioxide), blood oxygenation, internal pressures (e.g., intracranial pressures), measurement of concentration of anesthesia agents (e.g., nitrous oxide), etc. The classification of the input samples may therefore vary in some embodiments to accommodate aspects of the physiological characteristic of interest to a clinician. The number of classifications might also vary from two to three or more. For example, in some embodiments, rather than classifying the input samples as “normal” or “non-normal”, a sample might be classified as “high”, “normal”, or “low”. Still other physiological characteristics and classifications may become apparent to those skilled in the art having the benefit of this disclosure.
Although the technique as disclosed presumes the input samples will previously have been acquired, acquisition of input samples in an ECG context will now be discussed for the sake of completeness.
The ECG system 306 acquires a number of ECG waveforms 320 such as the example waveform 400 in
The processing may be performed manually on, for example, a workstation, by a user through a user interface. So, for example, in the embodiment of
The computing apparatus 503 may be, for instance, a workstation. The computing apparatus 503 includes, in this embodiment, a processor-based resource 506, a memory 509, and a user interface 512. The user interface 512 includes a display 515, one or more peripheral devices such as a keyboard 518 and/or mouse 521, and a software component (“UI”) 524 residing on the memory 509. The UI software component 524 is executed by the processor-based resource 506 to provide a presentation (not separately shown) on the display 515 with which the user 500 may interact using the peripheral component(s).
As those in the art having the benefit of this disclosure will appreciate, the term “processor” is understood in the art to have a definite connotation of structure. A processor may be hardware, software, or some combination of the two. In the illustrated embodiment of
The processor-based resource 506 executes machine executable instructions 527 residing in the memory 509 to perform the functionality of the technique described herein. The instructions 527 may be embedded as firmware in the memory 509 or encoded as routines, subroutines, applications, etc. The memory 509 is a computer-readable non-transitory storage medium and may be local or remote. The memory 509 may be distributed, for example, across a computing cloud. The memory 509 may include Read-Only Memory (“ROM”), Random Access Memory (“RAM”), or a combination of the two. The memory 509 will typically be installed memory but may be removable. The memory 509 may be primary, secondary, or tertiary storage, or some combination thereof, implemented using electromagnetic, optical, or solid-state technologies. Accordingly, the memory 509 may be, in various embodiments, a part of a mass storage device, a hard disk drive, a solid-state drive, an external drive (whether disk or solid-state), an optical disk, a magnetic disk, a portable external drive, a jump drive, etc.
Accordingly, in the illustrated embodiment, the computing apparatus 503 performs the computer-implemented functionality of the presently disclosed technique. More particularly, the processor-based resource 506 executes the instructions 527, both shown in
For example, in some embodiments the computing apparatus 503 may be some other kind of personal computing apparatus and may access data over a network such as a Local Area Network (“LAN”). In other embodiments, the computing apparatus 503 may be a computing system. In such a computing system, the processing and memory resources may be distributed across a cloud, for example, accessed from a workstation or other personal computing device over a public or private network such as the Internet. Still other variations in the computing apparatus 503 may be realized by those skilled in the art having the benefit of this disclosure.
Still referring to the computing apparatus 503, the memory 509 also holds a plurality of input samples 530 which, over the course of the technique described herein, are divided into labeled samples 533 and unlabeled samples 536.
Also residing in the memory 509 are an autoencoder 540 and a deep neural network 545. The autoencoder 540 will be trained using unsupervised learning with the unlabeled samples 536 in a manner to be described more fully below. The deep neural network 545 will be trained using the smart annotation technique in a manner also to be described more fully below. Note that there is no requirement that the input samples 530, autoencoder 540, and deep neural network 545 reside in the same memory device or on the same computing resource as the instructions 527. As discussed above, the computing apparatus 503 may be implemented as a distributed computing system. Accordingly, the input samples 530, autoencoder 540, deep neural network 545, and instructions 527 may reside in different memory devices and/or in different computing resources in some embodiments.
Turning now specifically to the autoencoder 540, an autoencoder is a specific type of neural network that can be used to learn a compressed representation of raw data. An autoencoder includes an “encoder” 541 and a “decoder” 542. The encoder 541 compresses the input and the decoder 542 attempts to recreate the input from the compressed version provided by the encoder. The compression performed by the encoder 541 is sometimes referred to as “feature extraction”. The compressed input produced by the encoder 541 is sometimes called “the code” of the autoencoder 540. After training, the encoder 541 is saved and the decoder 542 may be discarded.
The autoencoder 600 includes an encoder 603 and a decoder 606. The encoder 603 comprises a plurality of modules 609a-609k and the decoder 606 comprises a plurality of modules 612a-612k. In the context of this disclosure, a “module” is an identifiable piece of software that performs a particular function when executed. Each of the modules 609a-609k and 612a-612k is identified using a nomenclature known to the art and performs a function known to the art. One skilled in the art will therefore be able to readily implement the autoencoder 600 from the disclosure herein. As noted above, both the encoder 603 and the decoder 606 are used in training the autoencoder 600 through unsupervised training, after which the decoder 606 may be discarded.
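By way of illustration only, a one-dimensional convolutional autoencoder of the general kind described above might be sketched in PyTorch as follows. The layer counts, channel widths, kernel sizes, and strides are assumptions and do not represent the modules 609a-609k and 612a-612k of the illustrated embodiment.

```python
# Illustrative sketch of a 1-D convolutional autoencoder for beats. The
# topology is an assumption; the disclosure does not prescribe it.
import torch.nn as nn

class BeatAutoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(  # compresses the input beat
            nn.Conv1d(1, 16, kernel_size=7, stride=2, padding=3), nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=7, stride=2, padding=3), nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=7, stride=2, padding=3), nn.ReLU(),
        )
        self.decoder = nn.Sequential(  # attempts to recreate the input
            nn.ConvTranspose1d(64, 32, kernel_size=7, stride=2,
                               padding=3, output_padding=1), nn.ReLU(),
            nn.ConvTranspose1d(32, 16, kernel_size=7, stride=2,
                               padding=3, output_padding=1), nn.ReLU(),
            nn.ConvTranspose1d(16, 1, kernel_size=7, stride=2,
                               padding=3, output_padding=1),
        )

    def forward(self, x):              # x: (batch, 1, beat_length)
        return self.decoder(self.encoder(x))
```

With the stride-2 encoder stages mirrored by stride-2 transposed convolutions, a beat whose length is divisible by eight is reconstructed at its original length.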
The term “module” is used herein in its accustomed meaning to those in the art. A module may be, for example, an identifiable piece of executable code residing in memory that, when executed, performs a specific functionality. Depending on the embodiment (or the context in which that term is used), a module may also, or alternatively, be an identifiable piece of hardware and/or identifiable combination of software and hardware that perform a specific functionality. The software and/or hardware of the module may be dedicated to a single use or may be utilized for multiple uses. Other established meanings may be realized by those in the art having the benefit of this disclosure.
Returning now to
The deep neural network 650 is, in this particular embodiment, a convolutional neural network (“CNN”). Note that ResNet is one particular type of CNN. The deep neural network 650 comprises a plurality of modules 660a-660m. Again, in the context of this disclosure, a “module” is an identifiable piece of software that performs a particular function when executed. Each of the modules 660a-660m is identified using a nomenclature known to the art and performs a function known to the art. One skilled in the art will therefore be able to readily implement the deep neural network 650 from the disclosure herein.
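For illustration only, a small one-dimensional CNN beat classifier of this general kind might be sketched as shown below; it is not the modules 660a-660m of the illustrated embodiment, and every size is an assumption.

```python
# Illustrative sketch of a small 1-D CNN that classifies beats as
# "normal" or "non-normal". Sizes and layer choices are assumptions.
import torch.nn as nn

class BeatClassifier(nn.Module):
    def __init__(self, num_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 32, kernel_size=7, padding=3), nn.BatchNorm1d(32),
            nn.ReLU(), nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.BatchNorm1d(64),
            nn.ReLU(), nn.MaxPool1d(2),
            nn.AdaptiveAvgPool1d(1),   # global average pooling
        )
        self.head = nn.Linear(64, num_classes)

    def forward(self, x):              # x: (batch, 1, beat_length)
        return self.head(self.features(x).squeeze(-1))
```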
Returning again to
Referring now collectively to
The unlabeled samples 536 used in training (at 710) the autoencoder 540 are, in the illustrated embodiments, individual “beats” such as the input samples 322 shown in
The method 700 then continues by training (at 720) the deep neural network 545 through supervised learning using the trained (at 710) autoencoder 540. The deep neural network 545 may be, for example, a convolutional neural network. In general, training the deep neural network 545 iteratively performs a process on a set of inputs that includes a feature extraction on the set of inputs and then classifies the inputs based on the extracted features. The deep neural network 545, in the method 700, uses the trained autoencoder 540 for the feature extraction.
More particularly, in accordance with the presently disclosed technique, the method 700 iteratively trains (at 740) the deep neural network with a plurality of successive subsets of manually labeled samples 533 drawn from the unlabeled samples 536 until convergence or until the unlabeled sample inputs 536 are exhausted. Each successive subset comprises a plurality of selected, distanced, unlabeled samples with the least confidence from the remaining unlabeled samples to which labels are propagated. Again, the distance determination includes using the autoencoder 540 for feature extraction.
The training (at 820) of the deep neural network, first shown in the preceding figure, is illustrated here in greater detail. It begins by obtaining (at 830) a first subset of labeled samples: a first predetermined number of randomly selected unlabeled samples is manually labeled to generate the first subset, as set forth above.
Once the first subset of labeled samples is obtained (at 830), the training (at 820) continues by training the deep neural network 545 with the first subset of labeled samples and then manually labeling (at 850) a second predetermined number of selected, distanced, unlabeled samples to generate a second subset of labeled samples.
The predetermined number of the second subset of labeled samples produced by the manual labeling (at 850) should exceed a threshold number. This threshold number may be achieved in a number of ways. For instance, the number of unlabeled samples initially selected for manual labeling (at 850) may be a statistically ascertained number such that, even after unlabeled samples are discarded, the remaining labeled samples are sufficient in number to exceed the threshold. Or, if the discards take the number of labeled samples in the second subset below the threshold, then additional unlabeled samples may be selected and processed. Or some combination of these approaches may be used. Still other approaches may become apparent to those skilled in the art having the benefit of this disclosure.
Returning to
The method 900 is computer-implemented. For present purposes, the discussion will assume the method is being implemented on the computing apparatus 503 of
The method 900 is performed, in this particular embodiment, by the processor-based resource 506 through the execution of the instructions 527. The method 900 may be invoked by the user 500 through the user interface 512. More particularly, the user 500 may invoke the method 900 using a peripheral input device, such as the keyboard 518 or the mouse 521, to interact with a user interface, such as a graphical user interface (“GUI”), presented by the UI 524.
Referring now to the method 900 in greater detail, the method 900 begins by providing a set of unlabeled samples 536 of the recorded waveforms.
The method 900 then trains (at 910) a deep neural network with the unlabeled samples 536 to develop the autoencoder 540. As the samples are unlabeled, this training (at 910) is unsupervised training. As discussed above, the autoencoder 540 includes the encoder 541 and the decoder 542. The decoder 542 is used only in the training (at 910) of the autoencoder 540 and may be discarded afterward.
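A minimal sketch of such unsupervised training follows, assuming the autoencoder class sketched earlier and a tensor of unlabeled beats. The optimizer, learning rate, batch size, and epoch count are illustrative assumptions.

```python
# Illustrative sketch: unsupervised training of the autoencoder by
# minimizing reconstruction error; no labels are used anywhere.
import torch
import torch.nn as nn

def train_autoencoder(autoencoder, unlabeled, epochs=50, lr=1e-3):
    opt = torch.optim.Adam(autoencoder.parameters(), lr=lr)
    loss_fn = nn.MSELoss()  # reconstruction error
    loader = torch.utils.data.DataLoader(unlabeled, batch_size=64, shuffle=True)
    for _ in range(epochs):
        for x in loader:
            opt.zero_grad()
            loss = loss_fn(autoencoder(x), x)  # recreate the input
            loss.backward()
            opt.step()
    return autoencoder.encoder  # keep the encoder; the decoder may be discarded
```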
The method 900 then receives (at 920) a plurality of manual labels for a first predetermined number of randomly selected, unlabeled samples 536. More particularly, the user 500 randomly selects a number N of samples from among the unlabeled samples 536 through the user interface 512 and manually labels them. In the illustrated embodiment, N=56, but N may have other values. For example, in other embodiments N may be 40 or 80. The processor-based resource 506 receives the manual labels from the user 500 for the randomly selected, unlabeled samples 536 to create the labeled samples 533. Note that, as samples are selected and labeled, they are removed from the pool of unlabeled samples 536 to join the pool of labeled samples 533.
The manually labeled, randomly selected samples are then “augmented” (at 930). Augmentation is a process known to the art by which distortion or “noise” with relatively little variation is intentionally introduced to the samples to avoid a phenomenon known as “overfitting”. In this context, overfitting describes a condition in which the training so tailors the deep neural network to the samples on which it is trained that it impairs the deep neural network's ability to accurately classify other samples on which it has not been trained. This is an optional step and may be omitted in some embodiments.
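By way of illustration, such noise-based augmentation might be sketched as follows, assuming the labeled beats are held in a tensor x with integer labels y. The noise scale and the number of copies are assumptions.

```python
# Illustrative sketch: augment labeled beats with zero-mean Gaussian
# noise of small variance to mitigate overfitting.
import torch

def augment_with_noise(x, y, copies=4, sigma=0.01):
    noisy = [x] + [x + sigma * torch.randn_like(x) for _ in range(copies)]
    return torch.cat(noisy), y.repeat(copies + 1)  # labels track the copies
```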
Those in the art having the benefit of this disclosure will appreciate that augmenting the samples is but one way to condition the samples. There are other techniques known to the art by which the samples may be conditioned. Other embodiments may therefore also use other techniques instead of, or in addition to, augmentation to mitigate overfitting and address other issues that may be encountered from unconditioned samples.
The method 900 then trains (at 940) the deep neural network 545 with the augmented, manually labeled, randomly selected samples 533. As mentioned above, the deep neural network 545 is a convolutional neural network although alternative embodiments may employ other kinds of deep neural networks. The training is an example of supervised training since the samples being used are labeled.
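A minimal supervised training-loop sketch follows, assuming the classifier sketched earlier and the augmented labeled samples; the loss function, optimizer, and hyperparameters are illustrative assumptions.

```python
# Illustrative sketch: supervised training of the classifier with the
# augmented labeled samples.
import torch
import torch.nn as nn

def train_classifier(model, x, y, epochs=30, lr=1e-3, batch_size=64):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    dataset = torch.utils.data.TensorDataset(x, y)
    loader = torch.utils.data.DataLoader(dataset, batch_size=batch_size,
                                         shuffle=True)
    model.train()
    for _ in range(epochs):
        for xb, yb in loader:
            opt.zero_grad()
            loss = loss_fn(model(xb), yb)  # labels supervise the training
            loss.backward()
            opt.step()
    return model
```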
The method 900 then applies (at 950) the trained deep neural network 545 to the remaining unlabeled samples 536. This application results in additional unsupervised training for the deep neural network 545 since the samples 536 are unlabeled. Thus, the presently disclosed technique may be referred to as a hybrid supervised-unsupervised learning technique. Likewise, the deep neural network 545, at the conclusion of training, may be referred to as hybrid supervised-unsupervised trained.
Next, the method 900 continues by selecting (at 960) a second predetermined number of selections of the remaining unlabeled samples 536. In this embodiment, the second predetermined number is equal to the first predetermined number (at 920), but other embodiments may use a second predetermined number that differs from the first predetermined number. Although some aspects of the selection (at 960) may be performed manually in some embodiments, the selecting (at 960) is performed in an automated fashion in this embodiment. The selecting (at 960) is performed by the processor-based resource 506 executing the instructions 527.
As shown in the accompanying figure, the selecting (at 960) begins by identifying (at 961) the second predetermined number of candidate unlabeled samples 536 having the least confidence.
The “least confidence” unlabeled samples 536 that have been identified (at 961) are then filtered (at 962). The filtering (at 962) uses the trained autoencoder 540 for feature extraction to determine (at 963) whether each identified candidate is too close to the immediately prior identified candidate.
If (at 964) an identified candidate is too close to the immediately prior identified candidate, the identified candidate may be discarded. A replacement candidate for the discarded candidate may then be identified (at 965). This process of determining (at 963), discarding (at 964), and replacing (at 965) may be iterated (at 966) until the second predetermined number of unlabeled samples has been identified and filtered.
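For illustration, the identify/filter/replace selection (at 961 through 966) might be sketched as follows, assuming the least-confidence ranking sketched earlier. The distance threshold is an assumption, and discarding a candidate implicitly promotes the next least-confident sample as its replacement.

```python
# Illustrative sketch: keep the n least-confident candidates whose
# encoder features are not too close to the immediately prior kept
# candidate; a discarded candidate is replaced by the next one in rank.
import torch

def select_diverse_least_confident(encoder, unlabeled, ranked_idx, n,
                                   min_dist=1.0):
    kept, prev_feat = [], None
    with torch.no_grad():
        for i in ranked_idx.tolist():              # least confident first
            feat = encoder(unlabeled[i:i + 1]).flatten()
            if prev_feat is None or torch.norm(feat - prev_feat) >= min_dist:
                kept.append(i)                     # far enough: keep it
                prev_feat = feat
            if len(kept) == n:
                break
    return kept
```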
Note that alternative embodiments may select (at 960) the second predetermined number of selections of the remaining unlabeled samples 536 differently than has been described above relative to
Returning to the method 900, labels are then propagated (at 970) to a third predetermined number of the remaining unlabeled samples 536 that are closest to the labeled samples 533, each labeled sample propagating its label to only one unlabeled sample, as discussed above.
The method 900 then iterates (at 980) until either convergence or the remaining unlabeled samples are exhausted. More particularly, the method 900 first checks (at 982) whether convergence has been reached. In the illustrated embodiment, convergence is reached when the accuracy changes by less than 0.25% over five iterations. If convergence has been reached, the method 900 ends (at 984).
If there is no convergence (at 982), the method 900 checks (at 986) to see if the unlabeled samples 536 have been exhausted. Recall that each iteration removes samples from the set of unlabeled samples 536 by labeling them and, so, the set of unlabeled samples 536 may be exhausted in some embodiments by a sufficient number of iterations. If the unlabeled samples 536 are exhausted (at 986), the method 900 ends (at 984). If there are additional unlabeled samples 536 remaining (at 986), execution flow returns to the selection (at 960) of unlabeled samples 536.
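A small sketch of this stopping test follows. Reading “accuracy changes less than 0.25% in five iterations” as a maximum-minus-minimum spread over the last five recorded accuracies is an interpretive assumption.

```python
# Illustrative sketch of the stopping test: stop on exhaustion of the
# unlabeled pool (at 986) or on convergence of accuracy (at 982).
def should_stop(accuracy_history, num_unlabeled_remaining):
    if num_unlabeled_remaining == 0:     # pool exhausted
        return True
    if len(accuracy_history) >= 5:       # convergence window
        window = accuracy_history[-5:]
        return max(window) - min(window) < 0.0025
    return False
```

For example, should_stop([0.910, 0.912, 0.911, 0.9115, 0.912], 1000) returns True under this reading, since the spread over the last five accuracies is only 0.002.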
The efficacy of the technique disclosed herein is illustrated in
A deep neural network trained in the manner described herein, such as the deep neural network 545 shown in
The foregoing outlines the features of several embodiments so that those of ordinary skill in the art may better understand various aspects of the present disclosure. Those of ordinary skill in the art should appreciate that they may readily use the present disclosure as a basis for designing or modifying other processes and structures for carrying out the same purposes and/or achieving the same advantages of various embodiments introduced herein. Those of ordinary skill in the art should also realize that such equivalent constructions do not depart from the spirit and scope of the present disclosure, and that they may make various changes, substitutions, and alterations herein without departing from the spirit and scope of the present disclosure.
Although the subject matter has been described in language specific to structural features or methodological acts, it is to be understood that the subject matter of the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as example forms of implementing at least some of the claims.
Various operations of embodiments are provided herein. The order in which some or all of the operations are described should not be construed to imply that these operations are necessarily order dependent. Alternative ordering will be appreciated having the benefit of this description. Further, it will be understood that not all operations are necessarily present in each embodiment provided herein. Also, it will be understood that not all operations are necessary in some embodiments.
It will be appreciated that layers, features, elements, etc., depicted herein are illustrated with particular dimensions relative to one another, such as structural dimensions or orientations, for example, for purposes of simplicity and ease of understanding, and that actual dimensions of the same may differ substantially from those illustrated herein in some embodiments. Moreover, “exemplary” is used herein to mean serving as an example, instance, illustration, etc., and not necessarily as advantageous. As used in this application, “or” is intended to mean an inclusive “or” rather than an exclusive “or”. In addition, “a” and “an” as used in this application and the appended claims are generally to be construed to mean “one or more” unless specified otherwise or clear from context to be directed to a singular form. Also, “at least one of A and B” and/or the like generally means A or B or both A and B. Furthermore, to the extent that “includes”, “having”, “has”, “with”, or variants thereof are used, such terms are intended to be inclusive in a manner similar to the term “comprising”. Also, unless specified otherwise, “first,” “second,” or the like are not intended to imply a temporal aspect, a spatial aspect, an ordering, etc. Rather, such terms are merely used as identifiers, names, etc. for features, elements, items, etc. For example, a first element and a second element generally correspond to element A and element B or two different or two identical elements or the same element.
This concludes the detailed description. The particular examples disclosed above are illustrative only, as the invention may be modified and practiced in different but equivalent manners apparent to those skilled in the art having the benefit of the teachings herein. Furthermore, no limitations are intended to the details of construction or design herein shown, other than as described in the claims below. It is therefore evident that the particular examples disclosed above may be altered or modified and all such variations are considered within the scope and spirit of the invention. Accordingly, the protection sought herein is as set forth in the claims below.
The priority and earlier effective filing date of U.S. Application Ser. No. 63/354,541, filed Jun. 22, 2022, is hereby claimed for all purposes, including the right of priority. This related application is also hereby incorporated by reference for all purposes as if expressly set forth verbatim herein.
Filing Document | Filing Date | Country | Kind
---|---|---|---
PCT/IB2023/056432 | 6/21/2023 | WO |

Number | Date | Country
---|---|---
63354541 | Jun. 2022 | US