There is a growing need for fast methods of generating high-accuracy MRI images.
There may be provided systems, methods, and computer readable medium as illustrated in the specification.
The embodiments of the disclosure will be understood and appreciated more fully from the following detailed description, taken in conjunction with the drawings in which the various figures illustrate examples of processes, systems, encoding, decoding and results.
In the following detailed description, numerous specific details are set forth in order to provide a thorough understanding of the invention. However, it will be understood by those skilled in the art that the present invention may be practiced without these specific details. In other instances, well-known methods, procedures, and components have not been described in detail so as not to obscure the present invention.
The subject matter regarded as the invention is particularly pointed out and distinctly claimed in the concluding portion of the specification. The invention, however, both as to organization and method of operation, together with objects, features, and advantages thereof, may best be understood by reference to the following detailed description when read with the accompanying drawings.
It will be appreciated that for simplicity and clarity of illustration, elements shown in the figures have not necessarily been drawn to scale. For example, the dimensions of some of the elements may be exaggerated relative to other elements for clarity. Further, where considered appropriate, reference numerals may be repeated among the figures to indicate corresponding or analogous elements.
Because the illustrated embodiments of the present invention may for the most part, be implemented using electronic components and circuits known to those skilled in the art, details will not be explained in any greater extent than that considered necessary as illustrated above, for the understanding and appreciation of the underlying concepts of the present invention and in order not to obfuscate or distract from the teachings of the present invention.
Any reference in the specification to a method should be applied mutatis mutandis to a device or system capable of executing the method and/or to a non-transitory computer readable medium that stores instructions for executing the method.
Any reference in the specification to a system or device should be applied mutatis mutandis to a method that may be executed by the system, and/or may be applied mutatis mutandis to non-transitory computer readable medium that stores instructions executable by the system.
Any reference in the specification to a non-transitory computer readable medium should be applied mutatis mutandis to a device or system capable of executing instructions stored in the non-transitory computer readable medium and/or may be applied mutatis mutandis to a method for executing the instructions.
Any combination of any module or unit listed in any of the figures, any part of the specification and/or any claims may be provided.
The specification and/or drawings may refer to a processing circuit. The processing circuit may be implemented as a central processing unit (CPU), and/or one or more other integrated circuits such as application-specific integrated circuits (ASICs), field programmable gate arrays (FPGAs), full-custom integrated circuits, a graphics processing unit (GPU), a neural network processor, etc., or a combination of such integrated circuits.
Any combination of any steps of any method illustrated in the specification and/or drawings may be provided.
Any combination of any subject matter of any of the claims may be provided.
Any combination of systems, units, components, processors, and sensors illustrated in the specification and/or drawings may be provided.
Magnetic resonance imaging (MRI) is a non-invasive imaging modality with a wide range of clinical applications due to its capacity to provide detailed soft-tissue images. The MRI signal is acquired in the Fourier space, called the “k-space”. An inverse Fourier transform (IFT) of the k-space is then applied to generate a meaningful MRI scan in the spatial domain [1]. The acquisition times required to sample the full k-space are a major limiting factor for achieving high spatial and temporal resolutions, reducing motion artifacts, improving patient experience, and reducing costs [2].
Partial sampling of the k-space can linearly reduce acquisition times. However, the reconstruction of an MRI image from undersampled k-space data is a highly ill-posed inverse problem. Naive reconstruction by zero-filling of the missing k-space data and application of the IFT results in a clinically meaningless image due to the presence of various artifacts [2]. Early works concentrated on the properties of the k-space, such as partial Fourier imaging methods that utilize Hermitian symmetry to reduce acquisition times [3].
Classical linear approaches for MRI reconstruction from undersampled data leverage advances in parallel imaging using multiple receiver coils, coupled with linear reconstruction algorithms applied either in the k-space domain [4] or in the spatial domain [5]. Nevertheless, the theoretical acceleration factor is bounded by the number of available coils [6]. The practical acceleration factor is further limited by noise amplification resulting from the matrix inversion [5].
The non-linear compressed sensing (CS) [7] approach aims to reconstruct a high-quality image from undersampled k-space data by constraining the associated ill-posed inverse problem with a sparsity prior by means of a sparsifying linear transform. While the CS objective function does not have a closed-form solution, it is a convex problem and can therefore be solved using numerous iterative algorithms [6].
During the past few years, a plethora of deep-neural-network (DNN) based methods have been proposed for undersampled MRI reconstruction, with substantial gains in both image quality and acceleration factors [8]. Similar to the classical methods, DNN-based methods can be applied in both the spatial domain and the k-space domain. The KIKI-net, for example, alternates iteratively between the image domain (I-CNN) and the k-space (K-CNN), where a data consistency constraint is enforced in an interleaving manner [9]. The more recent End-to-End Variational Network (E2E-VarNet) simultaneously estimates coil-specific sensitivity maps and predicts the fully-sampled k-space from the undersampled k-space data through a series of cascades [10].
Despite the promising performance of currently available DNN methods, Antun et al. [11] and Jalal et al. [12] demonstrated that, unlike their classical counterparts, DNN-based methods are unstable against variations in the acquisition process and the anatomical distribution. Examples of such variations include using a different undersampling mask or acceleration factor during inference than during training, and the presence of small pathologies or different anatomies compared to the data used for training.
Preliminary works aimed to address the stability gap in DNN-based MRI reconstruction through data augmentation techniques. Specifically, Liu et al. improve overall reconstruction performance and robustness against sampling-pattern discrepancies and images acquired at different contrast phases by augmenting the undersampled data with extensively varying undersampling patterns [13]. More recently, Jalal et al. [12] combined a DNN-based generative prior with classical CS-based reconstruction and posterior sampling to overcome the stability gap. However, this approach does not directly address the stability gap in DNN-based MRI reconstruction methods. In addition, physics-driven deep learning methods have emerged as a powerful tool to improve the generalization capacity of DNN-based undersampled MRI reconstruction, spanning methods that incorporate the physics of MRI acquisition by means of physics-driven loss functions, plug-and-play methods, generative models, and unrolled networks [6]. Specific examples include enforcing k-space consistency directly after image enhancement [14] and adding k-space consistency as an additional cost-function term during training [15].
Yet, the stability of DNN-based methods against variations in the acquisition process and the anatomical distribution remains an open question [6]. Further, current DNN methods formulate the ill-posed undersampled MRI reconstruction problem as a regression task, in which the goal is to predict the fully-sampled k-space data or a high-quality image from the undersampled data, effectively eliminating the sampling mask used during acquisition from the regression process at inference time. This is in contrast to their classical counterparts, which encode the sampling mask as part of the forward model of the system during the optimization process.
The suggested solution addresses the stability gap in undersampled MRI reconstruction with DNN by introducing a physically-primed DNN architecture and training approach. Unlike previous approaches, the suggested solution encodes the undersampling mask in addition to the observed data in the model architecture, and employs an appropriate training approach that uses data generated with various undersampling masks to encourage the model to generalize the undersampled MRI reconstruction problem.
The suggested solution introduces the physically-primed approach for DNN based MRI reconstruction from undersampled “k-space” data.
The suggested solution improves generalization capacity and robustness against variations in the acquisition process and the anatomical distribution.
The suggested solution was tested on the publicly available fastMRI dataset, and experimental evidence for the improved robustness, especially in clinically relevant regions, is provided.
The suggested solution improves the generalization capacity of DNN methods for undersampled MRI reconstruction by introducing a physically-primed DNN architecture and training approach. The suggested solution architecture encodes the undersampling mask in addition to the observed data in the model architecture and employs an appropriate training approach that uses data generated with various undersampling masks to encourage the model to generalize the undersampled MRI reconstruction problem.
The suggested solution achieved an enhanced generalization capacity which resulted in significantly improved robustness against variations in the acquisition process and in the anatomical distribution, especially in pathological regions, compared to both vanilla DNN methods and DNN trained with undersampling mask augmentation. Trained models and code to replicate our experiments will become available for research purposes upon acceptance.
The first row includes images (a)-(d) that depict variation in the acquisition process (different undersampling mask).
The second row includes images (e)-(h) that depict variation in the anatomical distribution (training on knee data and inference on brain data).
Images (a) and (e) illustrate a target image from fully-sampled k-space data.
Images (b) and (f) illustrate a reconstruction using a model trained with a fixed sampling mask (Peak Signal To Noise Ratio (PSNR)=24.9/14.07, Structural Similarity index (SSIM)=0.4684/0.1926).
Images (c) and (g) illustrate a reconstruction using a model trained with mask augmentations (PSNR=25.35/16.56, SSIM=0.4969/0.2812).
Images (d) and (h) illustrate a reconstruction using the proposed physically-primed approach (PSNR=26.4/26.05, SSIM=0.5354/0.6359).
The forward model of the undersampled single-coil MRI acquisition process is given by:
$k_{us} = M \circ Fx + n$ (1)
Where $k_{us} \in \mathbb{C}^N$ are the observed measurements in the k-space, $x \in \mathbb{C}^N$ is the image representing the underlying anatomy, $F$ is the Fourier operator, $M \in \mathbb{R}^N$ is a binary undersampling mask, $\circ$ is element-wise multiplication and $n$ is additive noise. For the sake of simplicity, we assume $n \sim \mathcal{N}(0, \sigma^2)$ [16].
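For illustration, the acquisition model of Eq. 1 can be simulated as in the following minimal sketch, which assumes a 2-D complex image, a centered orthonormal FFT convention, and complex Gaussian noise; the names (forward_model, sigma) are illustrative and do not appear in the original text.

```python
import numpy as np

def forward_model(image: np.ndarray, mask: np.ndarray, sigma: float = 0.0) -> np.ndarray:
    """Simulate undersampled single-coil MRI acquisition (Eq. 1).

    image: complex 2-D array x representing the underlying anatomy.
    mask:  binary array M with the same shape as the k-space.
    sigma: standard deviation of the additive noise n.
    """
    # Fourier operator F (centered, orthonormal 2-D FFT convention).
    k_full = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(image), norm="ortho"))
    # Additive noise n on the real and imaginary parts.
    noise = sigma * (np.random.randn(*k_full.shape) + 1j * np.random.randn(*k_full.shape))
    # Element-wise multiplication with the binary undersampling mask M.
    return mask * k_full + noise
```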
Direct reconstruction of the MRI image x from the undersampled data is an ill-posed inverse problem that cannot simply be solved using linear approaches. Naive reconstruction by zero-filling of the missing k-space data and application of the IFT will result in an aliased image that is clinically meaningless [2].
The non-linear CS approach enables high-quality reconstruction from the undersampled k-space by imposing a sparsity constraint, by means of a sparsifying transform $\Psi$, to regularize the ill-posed inverse problem. The reconstructed image, $\hat{x}$, is obtained by solving the following constrained optimization problem:
$\hat{x} = \arg\min_{x} \left\| M \circ Fx - k_{us} \right\|_2^2 + \lambda \left\| \Psi x \right\|_1$ (2)
Where $\lambda$ is the regularization weight balancing the data term and the assumed prior. Examples of the sparsifying transform $\Psi$ include the total-variation and wavelet transforms [7]. Recently, Jalal et al. [12] suggested replacing the sparsifying transform $\Psi$ with a DNN-based generative prior. Various optimization techniques have been developed to address the challenging CS optimization problem [17]. Yet, the high computational complexity and the limited ability to overcome image-quality degradation at high acceleration rates may interfere with clinical utilization [17].
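As an illustration of how a problem of the form of Eq. 2 can be solved iteratively, the following is a minimal ISTA-style sketch in which the sparsifying transform $\Psi$ is simplified to the identity, so the ℓ1 prior acts directly on the image; in practice a wavelet or total-variation prior would be used. The function and parameter names (cs_reconstruct, lam, n_iter) are illustrative assumptions, not part of the original text.

```python
import numpy as np

def fft2c(x):
    # Centered, orthonormal 2-D Fourier operator F.
    return np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(x), norm="ortho"))

def ifft2c(k):
    # Inverse of the centered, orthonormal Fourier operator.
    return np.fft.fftshift(np.fft.ifft2(np.fft.ifftshift(k), norm="ortho"))

def soft_threshold(z, t):
    # Complex soft-thresholding: proximal operator of t * ||.||_1.
    mag = np.abs(z)
    return np.where(mag > t, (1.0 - t / np.maximum(mag, 1e-12)) * z, 0.0)

def cs_reconstruct(k_us, mask, lam=1e-3, n_iter=200):
    """ISTA iterations for argmin_x 0.5*||M o Fx - k_us||^2 + lam*||x||_1."""
    x = ifft2c(k_us)  # zero-filled initialization
    for _ in range(n_iter):
        # Gradient step on the data-consistency term; a unit step size is valid
        # because M o F has unit operator norm for an orthonormal F.
        grad = ifft2c(mask * fft2c(x) - k_us)
        x = soft_threshold(x - grad, lam)
    return x
```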
Recently, DNN-based methods have been applied for MRI reconstruction from undersampled data, either in the image domain or in the k-space domain. In the k-space domain, the goal of these methods is to predict the fully sampled k-space data from the given undersampled k-space data. Without loss of generality, the prediction task can be represented as:
$\hat{k}_{full} = F_\theta(k_{us})$ (3)
Where $F_\theta$ denotes the network function and $\theta$ represents the DNN weights. Taking a supervised learning approach, the DNN weights $\theta$ are estimated by minimizing the loss between the full k-space data predicted by the DNN from the undersampled data, $k_{us}$, and the corresponding ground-truth fully sampled k-space data, $k_{full}$, as follows:
$\hat{\theta} = \arg\min_{\theta} \mathcal{L}\left(F_\theta(k_{us}),\, k_{full}\right)$ (4)
However, unlike the classical CS approach (Eq. 2), such methods are known to be unstable against the presence of variations in the acquisition process and the anatomical distribution [11; 12].
A key observation is that while the CS approach (Eq. 2) explicitly encodes the undersampling mask M as part of the system forward model, DNN-based approaches essentially ignore the undersampling mask during training and inference. Although augmentation techniques use undersampled k-space data generated with different masks during training [13], or incorporate the acquisition physics in physically-motivated loss functions [6], the undersampling mask information is not explicitly encoded in the DNN architecture and leveraged during inference. This may result in DNN instability compared to the classical CS counterpart.
Methods
Physically-Primed DNN for MRI Reconstruction
The main hypothesis suggested by the inventors is that by explicitly encoding the undersampling mask in the DNN architecture and leveraging this information during inference, the DNN will be better able to accurately generalize the ill-posed inverse problem associated with MRI reconstruction from undersampled data, and will thus be more robust against variations in the acquisition process and the anatomical distribution.
There is provided a physically-primed, U-Net-based [18] DNN architecture operating in the k-space domain.
The complex-valued k-space data is represented as a two channel input, corresponding to the real and imaginary parts.
The undersampling mask M is encoded by adding a third input channel to the DNN. The prediction of the full k-space data from the undersampled k-space data is defined as:
$\hat{k}_{full} = F_\theta(k_{us}, M)$ (5)
The DNN weights $\theta$ are estimated by minimizing the loss between the full k-space data predicted by the DNN, $\hat{k}_{full}$, from the undersampled data $k_{us}$ and the undersampling mask $M$, and the corresponding ground-truth fully sampled k-space data, $k_{full}$, as follows:
$\hat{\theta} = \arg\min_{\theta} \mathcal{L}\left(F_\theta(k_{us}, M),\, k_{full}\right)$ (6)
The physically-primed DNN model was encouraged to generalize the ill-posed inverse problem of MRI reconstruction from undersampled k-space data by varying the undersampling mask M during training.
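A minimal sketch of how the three-channel input of Eq. 5 may be assembled and how the undersampling mask may be varied per training step follows; the helper names (make_network_input, training_step, sample_mask) are illustrative, and sample_mask stands for any random mask generator such as the ones described under Undersampling Masks below.

```python
import torch

def make_network_input(k_us: torch.Tensor, mask: torch.Tensor) -> torch.Tensor:
    """Concatenate the complex k-space and the undersampling mask (input of Eq. 5).

    k_us: complex-valued undersampled k-space of shape (B, H, W).
    mask: binary undersampling mask broadcastable to (B, H, W).
    Returns a real-valued 3-channel tensor of shape (B, 3, H, W).
    """
    return torch.stack([k_us.real, k_us.imag, mask.expand_as(k_us.real)], dim=1)

def training_step(model, k_full, sample_mask, loss_fn):
    """One training step in which the undersampling mask varies per batch."""
    mask = sample_mask(k_full.shape)               # draw a fresh random mask M
    k_us = mask * k_full                           # retrospective undersampling
    pred = model(make_network_input(k_us, mask))   # 2-channel prediction of k_full
    target = torch.stack([k_full.real, k_full.imag], dim=1)
    return loss_fn(pred, target)
```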
Implementation Details
The models used are based on the U-Net suggested in the fastMRI work [2], modified so that the output is the summation of the k-space input and the network output. This network consists of two deep convolutional networks, an encoder followed by a decoder. The encoder consists of blocks of two 3×3 convolutions, each followed by a Rectified Linear Unit (ReLU) activation function. The input to the encoder is 3-channel data representing the concatenation of the complex k-space data (2 channels) and the binary undersampling mask (1 channel). The output of each block is down-sampled using a max-pooling layer with stride 2.
The decoder consists of blocks with a structure similar to that of the encoder, where the output of each block is up-sampled using a bilinear up-sampling layer. In each block, the decoder concatenates two inputs, the up-sampled output of the previous block and the output of the encoder block with the same resolution, before the first convolution. At the end of the decoder there are two 1×1 convolutions that reduce the number of channels to two, representing the real and imaginary parts of the k-space data.
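The following condensed PyTorch sketch illustrates the described structure: a three-channel k-space input, encoder blocks of two 3×3 convolutions with ReLU followed by max-pooling, a decoder with bilinear up-sampling and skip concatenations, two final 1×1 convolutions producing a two-channel output, and a residual summation with the k-space input. The channel counts are assumptions, normalization layers and other details of the fastMRI reference implementation are omitted, and the spatial dimensions are assumed divisible by 16; it is a sketch rather than the exact network.

```python
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    # Two 3x3 convolutions, each followed by a ReLU activation.
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1), nn.ReLU(inplace=True),
    )

class PhysicallyPrimedUNet(nn.Module):
    """U-Net over the k-space with a 3-channel input (real, imaginary, mask)."""

    def __init__(self, channels=(32, 64, 128, 256)):
        super().__init__()
        self.encoders = nn.ModuleList()
        in_ch = 3  # real k-space, imaginary k-space, binary mask
        for ch in channels:
            self.encoders.append(conv_block(in_ch, ch))
            in_ch = ch
        self.pool = nn.MaxPool2d(kernel_size=2, stride=2)
        self.bottleneck = conv_block(channels[-1], channels[-1] * 2)
        self.up = nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False)
        self.decoders = nn.ModuleList()
        dec_in = channels[-1] * 2
        for ch in reversed(channels):
            # Each decoder block sees the up-sampled features concatenated
            # with the encoder output of the same resolution.
            self.decoders.append(conv_block(dec_in + ch, ch))
            dec_in = ch
        self.final = nn.Sequential(
            nn.Conv2d(channels[0], channels[0], kernel_size=1),
            nn.Conv2d(channels[0], 2, kernel_size=1),  # real and imaginary output
        )

    def forward(self, x):
        k_in = x[:, :2]  # the 2-channel k-space input, used for the residual sum
        skips = []
        for enc in self.encoders:
            x = enc(x)
            skips.append(x)
            x = self.pool(x)
        x = self.bottleneck(x)
        for dec, skip in zip(self.decoders, reversed(skips)):
            x = self.up(x)
            x = dec(torch.cat([x, skip], dim=1))
        # The model predicts a residual that is summed with the k-space input.
        return k_in + self.final(x)
```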
In the example illustrated in
Dataset
The inventors used the publicly available fastMRI dataset [2], consisting of raw k-space data of knee and brain volumes. The knee images used in this study were obtained directly from the single-coil track of the fastMRI dataset, while the brain images were reconstructed from fully sampled, multi-coil k-space data. Brain images were reconstructed by applying the IFT to each individual coil and combining the coil images with geometric averaging. The training set consisted of 34,742 knee slices. The inventors split the fastMRI knee validation set into validation and test sets, since the original fastMRI test set does not allow applying random undersampling. The splitting ratio was 2:1, yielding 5,054 slices for validation and 2,081 slices for test. Brain images (1,000 slices) were used for test purposes only.
To further evaluate the clinical impact of the approach, the inventors also used the bounding-box annotations generated by subspecialist experts on the fastMRI knee and brain data, provided by the fastMRI+ dataset [19]. Each bounding-box annotation includes its coordinates and the relevant label for a given pathology on a slice-by-slice level. Pathology annotations were marked in 39% of the training set, 28% of the validation set, and 51% of the test set. These annotations made it possible to examine the clinical relevance of the models' performance on pathological regions.
Undersampling Masks
The inventors retrospectively undersampled the fully sampled k-space data by element-wise multiplication with randomly generated binary masks. The acceleration factor (R) was set to four or eight, where the undersampled k-space included 8% or 4% of the central region, respectively. The remaining k-space lines were sampled in three different ways to achieve the desired acceleration factor. The first sampling pattern was an equispaced mask with a fixed offset, meaning that the remaining k-space lines were sampled with equal spacing. The second sampling pattern was an equispaced mask with a varying offset, meaning that the remaining k-space lines were sampled with equal spacing but with random offset from the start. The third sampling pattern was a random mask, meaning that the remaining k-space lines were uniformly sampled.
The first mask (denoted (a)) is an equispaced mask with R=4. The second mask (denoted (b)) is an equispaced mask with R=8. The third mask (denoted (c)) is a random mask with R=4. The fourth mask (denoted (d)) is a random mask with R=8.
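For illustration, the three sampling patterns may be generated along the phase-encoding (column) dimension as in the following sketch; the exact spacing adjustments used to reach precisely 1/R sampling when the fully sampled center is included are omitted, and the function name make_mask and its arguments are illustrative assumptions.

```python
import numpy as np

def make_mask(num_cols, accel=4, center_frac=0.08, kind="equispaced_fixed", rng=None):
    """Generate a 1-D column undersampling mask (broadcast along the rows).

    accel:       acceleration factor R (4 or 8).
    center_frac: fraction of fully sampled central k-space lines
                 (8% for R=4, 4% for R=8).
    kind:        'equispaced_fixed', 'equispaced_random_offset' or 'random'.
    """
    rng = np.random.default_rng() if rng is None else rng
    mask = np.zeros(num_cols, dtype=np.float32)

    # Fully sample the central low-frequency region.
    num_center = int(round(num_cols * center_frac))
    start = (num_cols - num_center) // 2
    mask[start:start + num_center] = 1.0

    if kind == "equispaced_fixed":
        # Equal spacing with a fixed offset.
        mask[0::accel] = 1.0
    elif kind == "equispaced_random_offset":
        # Equal spacing with a random offset from the start.
        offset = int(rng.integers(accel))
        mask[offset::accel] = 1.0
    elif kind == "random":
        # Uniformly sample the remaining lines to reach roughly 1/R sampling.
        prob = (num_cols / accel - num_center) / max(num_cols - num_center, 1)
        mask[rng.random(num_cols) < prob] = 1.0
    return mask
```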
Training Settings
The inventors trained three models on the undersampled k-space with R=4. The undersampling pattern of the first model was fixed equispaced (Fixed). The undersampling pattern of the two other models was varying equispaced, while the input of one of them is the two-channel k-space (Baseline) and the input of the second consists of three channels, where the third channel is the mask pattern (Mask). The models were trained on transformed k-space to account for the orders-of-magnitude difference between the DC coefficient and the rest of the coefficients in the k-space. Specifically, the following transformation was used:
$k_t = \log(k + 1) \cdot 10^5$ (7)
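One plausible reading of Eq. 7 is sketched below, applied element-wise to the real and imaginary channels with the sign preserved so that negative coefficients remain well defined; the handling of negative values and the inverse mapping are assumptions not stated in the text.

```python
import torch

def kspace_transform(k: torch.Tensor, scale: float = 1e5) -> torch.Tensor:
    """Compress the k-space dynamic range (Eq. 7), per real/imaginary channel.

    A signed log is assumed: k_t = sign(k) * log(|k| + 1) * 1e5.
    The input is the real-valued 2-channel k-space representation.
    """
    return torch.sign(k) * torch.log(torch.abs(k) + 1.0) * scale

def kspace_inverse_transform(k_t: torch.Tensor, scale: float = 1e5) -> torch.Tensor:
    # Inverse mapping back to the original k-space scale.
    return torch.sign(k_t) * (torch.exp(torch.abs(k_t) / scale) - 1.0)
```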
All models were trained using the RMSprop algorithm. The initial learning rate was optimized for each model; a value of 0.01 was used for all models and was multiplied by 0.1 after 40 epochs. The inventors used the L1 norm between the 2-channel fully sampled k-space and the 2-channel reconstructed k-space as the loss function. The models were trained on two Nvidia A100 GPUs, each for 200 epochs. Training took about 48 hours per model. The inventors selected the models with the best validation loss for the experiments.
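A minimal sketch of the corresponding optimizer, learning-rate schedule, and loss configuration follows; the helper name configure_training is illustrative.

```python
import torch

def configure_training(model, lr=0.01, milestone=40, gamma=0.1):
    """Optimizer, schedule and loss matching the described training settings."""
    optimizer = torch.optim.RMSprop(model.parameters(), lr=lr)
    # Multiply the learning rate by 0.1 after 40 epochs.
    scheduler = torch.optim.lr_scheduler.MultiStepLR(
        optimizer, milestones=[milestone], gamma=gamma
    )
    # L1 loss between the 2-channel predicted and fully sampled k-space.
    loss_fn = torch.nn.L1Loss()
    return optimizer, scheduler, loss_fn
```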
Experimental Methodology
The inventors examined the generalization capability of the networks by testing their performance under varying acquisition conditions and anatomical distributions. Specifically, the inventors evaluated reconstruction performance for: 1) undersampling patterns different from those used during training (equispaced for training and random for test) with a similar acceleration factor (R=4), 2) a different acceleration factor (R=4 for training and R=8 for test), and 3) a different anatomical distribution (knee for training and brain for test).
The inventors used standard evaluation metrics, including the average normalized mean square error (NMSE), Peak Signal-to-Noise Ratio (PSNR) and Structural Similarity (SSIM), to assess the performance of the different models. To specifically determine the clinical relevance of the approach, the inventors calculated performance metrics separately for the entire images and for clinically relevant regions that include pathologies, using the clinical-region annotations from the fastMRI+ dataset [19]. The inventors determined statistically significant differences between the suggested approach and a baseline model trained with undersampling-mask data augmentation using a paired Student's t-test.
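The evaluation metrics may be computed as in the following sketch, which assumes real-valued magnitude images and uses scikit-image for SSIM; for clinically relevant regions, the same functions can be applied to the cropped bounding-box area.

```python
import numpy as np
from skimage.metrics import structural_similarity

def nmse(gt: np.ndarray, pred: np.ndarray) -> float:
    # Normalized mean squared error over the image (or a cropped region).
    return float(np.linalg.norm(gt - pred) ** 2 / np.linalg.norm(gt) ** 2)

def psnr(gt: np.ndarray, pred: np.ndarray) -> float:
    # Peak signal-to-noise ratio; the peak is taken as the maximum of the target.
    mse = np.mean((gt - pred) ** 2)
    return float(20 * np.log10(gt.max()) - 10 * np.log10(mse))

def ssim(gt: np.ndarray, pred: np.ndarray) -> float:
    # Structural similarity; the data range is taken from the target image.
    return float(structural_similarity(gt, pred, data_range=gt.max() - gt.min()))
```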
Results
Images (a) and (e) are target images. Images (b) and (f) illustrate a reconstruction by the Fixed model (PSNR=14.85/10.92, SSIM=0.2405/0.364). Images (c) and (g) illustrate a reconstruction by the Baseline model (PSNR=20.17/15.53, SSIM=0.4315/0.5377). Images (d) and (h) illustrate a reconstruction by the Mask model (PSNR=27.75/24.67, SSIM=0.7446/0.7779).
Tables 1 and 2 summarize model performance on the entire image and on the clinically relevant regions, respectively, for the same anatomical distribution (i.e., knee data) but with variations in the acquisition process (i.e., a different equispaced sampling mask). Similarly, Tables 3 and 4 summarize model performance for random sampling masks at test time. In all cases, the physically-primed model (Mask) has significantly better reconstruction accuracy (paired Student's t-test, p<<0.01). The improved accuracy indicates better generalization and robustness against variations in the acquisition process compared to models trained with and without mask augmentation. It is important to note that the improved performance is evident mostly in the clinically relevant regions, which are critical for clinical diagnosis.
Tables 5 and 6 demonstrate the generalization capacity of the suggested physically-primed model in case of variation in the anatomical distribution, i.e., training on knee data while testing on brain data, for the entire image and for the clinically relevant regions, respectively. The suggested physically-primed model (Mask) performed significantly better (paired Student's t-test, p<<0.01) than the models trained with and without mask augmentation.
The improved robustness of the suggested physically-primed model against variations in both the acquisition process and the anatomical distribution suggests better generalization capacity beyond more common data augmentation techniques.
The inventors introduced a physically-primed DNN architecture to address the stability gap in deep-learning-based MRI reconstruction from undersampled k-space data. The previously observed stability gap indicates limited generalization of the associated ill-posed inverse problem compared to classical methods, and practically impedes the use of DNN-based MRI reconstruction from undersampled data in the clinical setting. While previous approaches aimed to address the stability gap through data augmentation and physically-motivated loss functions, the suggested physically-primed DNN approach improves the generalization capacity by encoding the forward model, including the undersampling mask, in the network architecture and by introducing an appropriate training scheme with samples generated with various undersampling masks. The experiments showed that encoding the undersampling mask as part of the DNN architecture improves generalization capacity, especially in clinically relevant regions, compared to previously proposed data augmentation techniques, in multiple scenarios including changing the undersampling mask, modifying the sampling factor, and applying the DNN to reconstruct images acquired from a different anatomical region.
It should be noted that the encoding of the undersampling mask into the DNN architecture can be executed in manners that differ from the illustrated encoding scheme.
It should be noted that the outcome of the method can be evaluated by using quantitative metrics other than PSNR and SSIM.
In conclusion, the physically-primed approach has the potential to improve the generalization capacity and robustness of DNN-based methods for MRI reconstruction from undersampled k-space data. This, in turn, has the potential to facilitate the utilization of DNN-based MRI reconstruction methods in the clinical setting.
According to an embodiment, method 600 includes step 610 of obtaining an under-sampled frequency domain representation (FDR) of the spatial information. The under-sampled FDR was obtained by sampling an FDR of the spatial information with a sampling mask.
According to an embodiment, the FDR of the spatial information is a Fourier transform representation of the spatial information. The FDR may be generated by means other than applying a Fourier transform.
According to an embodiment step 610 includes at least one of:
According to an embodiment, step 610 is followed by step 620 of feeding the under-sampled FDR and the sampling mask to a machine learning process.
The machine learning process may be one or more neural networks or may differ from one or more neural networks.
According to an embodiment the one or more neural networks include any type of neural network—for example:
According to an embodiment, the machine learning process is implemented by one or more processing circuits.
According to an embodiment the one or more integrated circuits belong to the source of the spatial information—for example may belong to an MRI system.
According to an embodiment the one or more integrated circuits are in communication with the source of the spatial information—for example may be in communication with the MRI system—for example over one or more networks of any type, may belong to a remote computer or to a computer that is proximate to the source of the spatial information.
According to an embodiment, step 620 is followed by step 630 of reconstructing the spatial information by the machine learning process.
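For illustration, steps 610-630 may be tied together as in the following sketch, assuming a Fourier-domain FDR and a machine learning process that maps the under-sampled FDR and the sampling mask to a predicted full FDR; the function and argument names are illustrative assumptions.

```python
import numpy as np

def reconstruct_spatial_information(undersampled_fdr, sampling_mask, ml_process):
    """Minimal sketch of steps 610-630 of method 600."""
    # Step 610: the under-sampled FDR and the sampling mask that produced it
    # are obtained (e.g., from an MRI system).
    # Step 620: feed the under-sampled FDR and the sampling mask to the
    # machine learning process, which predicts the full FDR.
    predicted_fdr = ml_process(undersampled_fdr, sampling_mask)
    # Step 630: reconstruct the spatial information; for a Fourier-transform FDR
    # this amounts to an inverse Fourier transform of the predicted full FDR.
    image = np.fft.fftshift(np.fft.ifft2(np.fft.ifftshift(predicted_fdr), norm="ortho"))
    return np.abs(image)
```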
According to an embodiment, method 600 includes step 605 of obtaining the machine learning process.
Step 605 may include at least one of:
The machine learning process was trained using a training data set that may include training under-sampled FDRs of training spatial information, and one or more training sampling masks that were used to sample FDRs of training spatial information. The one or more training sampling masks may be multiple training sampling masks, and at least two of the training sampling masks may differ from each other.
According to an embodiment, at least one training sampling mask of the one or more training sampling masks differs from the sampling mask. Thus—the sampling mask applied during inference differs from the training sampling masks.
According to an embodiment, the training spatial information includes multiple training spatial information units, wherein at least one of the training spatial information units may be related to an object that differs from an object related to the spatial information. Thus—the training may be done with images of a certain object while the inference can be done in relation to another object.
According to an embodiment, at least one processing circuit does not belong to the MRI system (may belong to a computerized system that is in communication with the MRI system) and may receive the FDR of the spatial information, and hosts a machine learning process that reconstructs the spatial information. According to an embodiment the FDR of the MRI information is determined using at least one processing circuit that does not belong to the MRI system.
There is provided a non-transitory computer readable medium for reconstructing spatial information, the non-transitory computer readable medium stores instructions for: obtaining an under-sampled frequency domain representation (FDR) of the spatial information, wherein the under-sampled FDR was obtained by sampling an FDR of the spatial information with a sampling mask; feeding the under-sampled FDR and the sampling mask to a machine learning process; and reconstructing the spatial information by the machine learning process. The machine learning process was trained using a training data set that comprises training under-sampled FDRs of training spatial information, and one or more training sampling masks that were used to sample FDRs of training spatial information.
According to an embodiment, the one or more training sampling masks are multiple training sampling masks, and wherein at least two of the training sampling masks differ from each other.
According to an embodiment, the FDR of the spatial information is magnetic resonance imaging (MRI) information.
According to an embodiment, the MRI information is obtained from a single coil of an MRI system.
According to an embodiment, the MRI information is obtained from multiple coils of an MRI system.
According to an embodiment, the FDR of the spatial information is a Fourier transform representation of the spatial information.
According to an embodiment, the machine learning process is implemented by one or more neural networks.
According to an embodiment, the one or more neural networks comprise a UNET.
According to an embodiment, the one or more neural networks comprise a transformer neural network.
According to an embodiment, the one or more neural networks comprise an end-to-end variational network.
According to an embodiment, the one or more neural networks comprise a sampling mask encoder and an under-sampled FDR encoder.
According to an embodiment, the at least one training sampling mask of the one or more training sampling masks differs from the sampling mask.
According to an embodiment, the training spatial information comprises multiple training spatial information units, wherein at least one of the training spatial information units is related to an object that differs from an object related to the spatial information.
According to an embodiment, the non-transitory computer readable medium stores instructions for training the machine learning process using the training data set.
According to an embodiment, the non-transitory computer readable medium stores instructions for testing the machine learning process using a test data set.
According to an embodiment, the non-transitory computer readable medium stores instructions for controlling a generation of the spatial information.
According to an embodiment, the non-transitory computer readable medium stores instructions for controlling a generation of the spatial information by one or more coils of a magnetic resonance imaging (MRI) system.
While the foregoing written description of the invention enables one of ordinary skill to make and use what is considered presently to be the best mode thereof, those of ordinary skill will understand and appreciate the existence of variations, combinations, and equivalents of the specific embodiment, method, and examples herein. The invention should therefore not be limited by the above described embodiment, method, and examples, but by all embodiments and methods within the scope and spirit of the invention as claimed.
In the foregoing specification, the invention has been described with reference to specific examples of embodiments of the invention. It will, however, be evident that various modifications and changes may be made therein without departing from the broader spirit and scope of the invention as set forth in the appended claims.
Those skilled in the art will recognize that the boundaries between logic blocks are merely illustrative and that alternative embodiments may merge logic blocks or circuit elements or impose an alternate decomposition of functionality upon various logic blocks or circuit elements. Thus, it is to be understood that the architectures depicted herein are merely exemplary, and that in fact many other architectures may be implemented which achieve the same functionality.
Any arrangement of components to achieve the same functionality is effectively “associated” such that the desired functionality is achieved. Hence, any two components herein combined to achieve a particular functionality may be seen as “associated with” each other such that the desired functionality is achieved, irrespective of architectures or intermedial components. Likewise, any two components so associated can also be viewed as being “operably connected,” or “operably coupled,” to each other to achieve the desired functionality.
Furthermore, those skilled in the art will recognize that boundaries between the above-described operations are merely illustrative. Multiple operations may be combined into a single operation, a single operation may be distributed in additional operations, and operations may be executed at least partially overlapping in time. Moreover, alternative embodiments may include multiple instances of a particular operation, and the order of operations may be altered in various other embodiments.
Also for example, in one embodiment, the illustrated examples may be implemented as circuitry located on a single integrated circuit or within a same device. Alternatively, the examples may be implemented as any number of separate integrated circuits or separate devices interconnected with each other in a suitable manner.
However, other modifications, variations and alternatives are also possible. The specifications and drawings are, accordingly, to be regarded in an illustrative rather than in a restrictive sense.
In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The word ‘comprising’ does not exclude the presence of other elements or steps than those listed in a claim. Furthermore, the terms “a” or “an,” as used herein, are defined as one or more than one. Also, the use of introductory phrases such as “at least one” and “one or more” in the claims should not be construed to imply that the introduction of another claim element by the indefinite articles “a” or “an” limits any particular claim containing such introduced claim element to inventions containing only one such element, even when the same claim includes the introductory phrases “one or more” or “at least one” and indefinite articles such as “a” or “an.” The same holds true for the use of definite articles. Unless stated otherwise, terms such as “first” and “second” are used to arbitrarily distinguish between the elements such terms describe. Thus, these terms are not necessarily intended to indicate temporal or other prioritization of such elements. The mere fact that certain measures are recited in mutually different claims does not indicate that a combination of these measures cannot be used to advantage.
While certain features of the invention have been illustrated and described herein, many modifications, substitutions, changes, and equivalents will now occur to those of ordinary skill in the art. It is, therefore, to be understood that the appended claims are intended to cover all such modifications and changes as fall within the true spirit of the invention.
It is appreciated that various features of the embodiments of the disclosure which are, for clarity, described in the contexts of separate embodiments may also be provided in combination in a single embodiment. Conversely, various features of the embodiments of the disclosure which are, for brevity, described in the context of a single embodiment may also be provided separately or in any suitable sub-combination.
It will be appreciated by persons skilled in the art that the embodiments of the disclosure are not limited by what has been particularly shown and described hereinabove. Rather the scope of the embodiments of the disclosure is defined by the appended claims and equivalents thereof.
This application claims priority from U.S. Provisional Patent Application Ser. No. 63/371,398, filed Aug. 14, 2022, which is incorporated herein by reference.