This U.S. patent application claims priority under 35 U.S.C. § 119 to: Indian Patent Application number 202321089711, filed on Dec. 29, 2023. The entire contents of the aforementioned application are incorporated herein by reference.
The disclosure herein generally relates to the field of image processing, and, more particularly, to a method and system for multimodal image super-resolution using convolutional dictionary learning.
Image super-resolution (ISR) refers to enhancing pixel-based image resolution while minimizing visual artifacts. Generating a high-resolution (HR) version of a low-resolution (LR) image is a complex task that involves inferring missing pixel values, which makes ISR a challenging and ill-posed problem. Different approaches have been proposed to regularize this ill-posed problem by incorporating prior knowledge, such as natural priors, local and non-local similarity, and features based on dictionary learning and deep learning. However, all these methods primarily focus on single-modality images without exploiting the information available from other imaging modalities that can be utilized as guidance for ISR. In many practical applications with multi-modal imaging systems in place, the same scene is captured by different imaging modalities. Remote sensing for earth observation is one such application, where panchromatic and multi-spectral images are acquired at different resolutions to manage cost, bandwidth, and complexity. This drives the need for Multi-modal Image Super-Resolution (MISR) techniques that can enhance the LR images of the target modality using information from HR images of other modalities (referred to as the guidance modality) that share salient features such as boundaries, textures, and edges. Depth map super-resolution with guidance from RGB images, medical image super-resolution using multi-modal Magnetic Resonance (MR) images, and cross-modal Computed Tomography (CT) and MR imaging are some other applications where MISR is required.
Existing works in the field of MISR include compressive sensing, Guided image Filtering (GF), Joint Bilateral Filtering (JBF), Joint image Restoration (JR), Deep Joint image Filtering (DJF), Coupled Dictionary Learning (Coupled DL), Joint Coupled Transform Learning (JCTL) and Joint Coupled Deep Transform Learning (JCDTL), and the recent Multi-modal Convolutional Dictionary Learning (MCDL) technique. All these techniques differ in the way they transfer the structural details of the guidance image to enhance the LR image of the target modality. Their performance depends on how well they are able to identify and model the complex dependencies between the two imaging modalities. Sparse representation learning using Dictionary Learning (DL) has gained popularity for addressing inverse problems, including MISR, but it has certain constraints. One notable limitation is that the learned dictionaries are inherently not translation invariant, i.e., the basis elements often appear as shifted versions of one another. Additionally, as these dictionaries operate on individual patches rather than the entire image, they reconstruct and sparsify patches independently, due to which the underlying structure of the signal may get lost. Both these limitations impact the quality of the reconstructed image. Convolutional Dictionary Learning (CDL), on the other hand, employs translation-invariant dictionaries, called convolutional dictionaries, which can effectively mitigate these limitations. These dictionaries are well-suited for representing signals that exhibit translation invariance, such as natural images and sounds, making them applicable to a range of image processing and computer vision tasks.
Some of the works in literature have attempted to use CDL for MISR, but they have certain drawbacks. One such work discloses an image super-resolution analysis method based on a multi-modal convolutional sparse coding network, which involves learning five modules (one multi-modal convolutional sparse encoder, two side-information encoders, and two convolutional decoders) to generate an HR image of the target modality. This network has a large number of parameters to be trained and therefore requires a huge amount of training data and a long training time.
Embodiments of the present disclosure present technological improvements as solutions to one or more of the above-mentioned technical problems recognized by the inventors in conventional systems. For example, in one embodiment, a method for multimodal image super-resolution using convolutional dictionary learning is provided. The method includes obtaining a plurality of training images comprising a set of low-resolution images ‘X’ of a target modality, a set of high-resolution images ‘Y’ of a guidance modality, and a set of high-resolution images ‘Z’ of the target modality. Further, the method includes initializing a plurality of dictionaries and associated plurality of sparse coefficients. The plurality of dictionaries comprise i) a first convolutional dictionary ‘S’ associated with a first set of sparse coefficients ‘A’ among the plurality of sparse coefficients, ii) a second convolutional dictionary ‘G’ associated with a second set of sparse coefficients ‘B’ among the plurality of sparse coefficients, iii) a first coupling convolutional dictionary ‘W’, and iv) a second coupling convolutional dictionary ‘V’. Furthermore, the method includes jointly training the initialized plurality of dictionaries and the associated plurality of sparse coefficients using the plurality of training images by performing a plurality of steps iteratively until convergence of an objective function is achieved. The plurality of steps comprise training the plurality of sparse coefficients by keeping the plurality of dictionaries fixed; and training the plurality of dictionaries by keeping the plurality of sparse coefficients fixed. The trained plurality of dictionaries and the associated plurality of sparse coefficients obtained upon achieving the convergence of the objective function are used for performing a multimodal image super resolution.
In another aspect, a system for multimodal image super-resolution using convolutional dictionary learning is provided. The system includes: a memory storing instructions; one or more communication interfaces; and one or more hardware processors coupled to the memory via the one or more communication interfaces, wherein the one or more hardware processors are configured by the instructions to obtain a plurality of training images comprising a set of low-resolution images ‘X’ of a target modality, a set of high-resolution images ‘Y’ of a guidance modality, and a set of high-resolution images ‘Z’ of the target modality. Further, the one or more hardware processors are configured by the instructions to initialize a plurality of dictionaries and associated plurality of sparse coefficients. The plurality of dictionaries comprise i) a first convolutional dictionary ‘S’ associated with a first set of sparse coefficients ‘A’ among the plurality of sparse coefficients, ii) a second convolutional dictionary ‘G’ associated with a second set of sparse coefficients ‘B’ among the plurality of sparse coefficients, iii) a first coupling convolutional dictionary ‘W’, and iv) a second coupling convolutional dictionary ‘V’. Furthermore, the one or more hardware processors are configured by the instructions to jointly train the initialized plurality of dictionaries and the associated plurality of sparse coefficients using the plurality of training images by performing a plurality of steps iteratively until convergence of an objective function is achieved. The plurality of steps comprise training the plurality of sparse coefficients by keeping the plurality of dictionaries fixed; and training the plurality of dictionaries by keeping the plurality of sparse coefficients fixed. The trained plurality of dictionaries and the associated plurality of sparse coefficients obtained upon achieving the convergence of the objective function are used for performing a multimodal image super resolution.
In yet another aspect, there are provided one or more non-transitory machine-readable information storage mediums comprising one or more instructions which when executed by one or more hardware processors cause the one or more hardware processors to perform a method for multimodal image super-resolution using convolutional dictionary learning. The method includes obtaining a plurality of training images comprising a set of low-resolution images ‘X’ of a target modality, a set of high-resolution images ‘Y’ of a guidance modality, and a set of high-resolution images ‘Z’ of the target modality. Further, the method includes initializing a plurality of dictionaries and associated plurality of sparse coefficients. The plurality of dictionaries comprise i) a first convolutional dictionary ‘S’ associated with a first set of sparse coefficients ‘A’ among the plurality of sparse coefficients, ii) a second convolutional dictionary ‘G’ associated with a second set of sparse coefficients ‘B’ among the plurality of sparse coefficients, iii) a first coupling convolutional dictionary ‘W’, and iv) a second coupling convolutional dictionary ‘V’. Furthermore, the method includes jointly training the initialized plurality of dictionaries and the associated plurality of sparse coefficients using the plurality of training images by performing a plurality of steps iteratively until convergence of an objective function is achieved. The plurality of steps comprise training the plurality of sparse coefficients by keeping the plurality of dictionaries fixed; and training the plurality of dictionaries by keeping the plurality of sparse coefficients fixed. The trained plurality of dictionaries and the associated plurality of sparse coefficients obtained upon achieving the convergence of the objective function are used for performing a multimodal image super resolution.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the invention, as claimed.
The accompanying drawings, which are incorporated in and constitute a part of this disclosure, illustrate exemplary embodiments and, together with the description, serve to explain the disclosed principles:
Exemplary embodiments are described with reference to the accompanying drawings. In the figures, the left-most digit(s) of a reference number identifies the figure in which the reference number first appears. Wherever convenient, the same reference numbers are used throughout the drawings to refer to the same or like parts. While examples and features of disclosed principles are described herein, modifications, adaptations, and other implementations are possible without departing from the scope of the disclosed embodiments.
In multi-modal imaging systems, different modalities often capture images of the same scene. While each sensor captures some unique features, the modalities still share common features, for example, edges, texture, and shapes, that can be leveraged for super-resolution tasks. The objective of Multi-modal Image Super resolution (MISR) is to reconstruct a High Resolution (HR) image z of the target modality from a Low Resolution (LR) image x of the target modality with the guidance of an HR image y from another modality, referred to as the guidance modality, by modelling the cross-modal dependencies between the different modalities. The existing MISR techniques differ in the way they transfer the structural details of the guidance image to enhance the LR image of the target modality. Their performance depends on how well they are able to identify and model the complex dependencies between the two imaging modalities. To better handle the disparities between the guidance and target modalities, learning-based methods such as deep learning, and sparse representation learning employing dictionaries and transforms, are more popular. In general, deep learning methods require more training data and massive compute resources for good reconstruction. Also, they lack interpretability, and the trained models are not guaranteed to enforce measurement consistency between the inputs and the output during testing. In contrast, sparse representation learning-based methods do not suffer from these drawbacks and offer improved performance compared to deep learning techniques, especially with limited training data.
However, sparse representation learning based techniques also have certain constraints. The learned dictionaries are inherently not translation invariant. Additionally, as these dictionaries operate on individual patches rather than the entire image, they reconstruct and sparsify patches independently, due to which the underlying structure of the signal may get lost. Both these limitations impact the quality of the reconstructed image. Thus, the present disclosure provides a method and system for multi-modal image super resolution using convolutional dictionaries, which are translation invariant. A convolutional dictionary consists of a set of M filters and associated sparse coefficients that are learned to capture features (such as edges, corners, color gradients, boundaries, etc.) of input images from different modalities. As understood by a person skilled in the art, in convolutional dictionary learning, a set of signals (i.e., images, in the context of the present disclosure) {x_i}, i=1, . . . , N, with N measurements each of dimension n, is reconstructed using a set of M linear filters {d_m}, m=1, . . . , M, and an associated set of coefficients {a_{m,i}} by optimizing equation 1.
In equation 1, * denotes the convolution operation, the l1 norm on a_{m,i} is used to enforce sparsity, and the l2 norm constraint on d_m is employed to compensate for the scaling ambiguity between dictionary atoms and coefficients. Defining D_m as a linear operator such that D_m a_{m,i} = d_m * a_{m,i}, and taking D = (D_1, . . . , D_M), X = (x_1, . . . , x_N), and A as the corresponding block matrix of coefficients, equation 1 is rewritten as equation 2.
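For reference, the standard convolutional dictionary learning formulation from the literature, which matches the description of equations 1 and 2 above, can be written in LaTeX notation as follows; the exact constants and constraint form in the disclosure's equations may differ:

\min_{\{d_m\},\{a_{m,i}\}} \; \frac{1}{2}\sum_{i=1}^{N}\Big\| \sum_{m=1}^{M} d_m * a_{m,i} - x_i \Big\|_2^2 + \lambda \sum_{i=1}^{N}\sum_{m=1}^{M} \|a_{m,i}\|_1 \quad \text{s.t. } \|d_m\|_2 \le 1, \; m = 1, \ldots, M \qquad \text{(cf. equation 1)}

and, in block-structured notation,

\min_{D, A} \; \frac{1}{2}\|DA - X\|_2^2 + \lambda \|A\|_1 \quad \text{s.t. } \|d_m\|_2 \le 1, \; m = 1, \ldots, M \qquad \text{(cf. equation 2)}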
The optimization problem in equation 2 is not jointly convex in both the dictionary filters and the coefficients. Hence, the Alternating Minimization (AM) technique is employed to estimate them. While the convolutional sparse coefficients can be solved for by the Alternating Direction Method of Multipliers (ADMM) in the DFT domain, the convolutional dictionaries can be solved for using the Convolutional Constrained Method of Optimal Directions (CCMOD).
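While the implementation reported later in this disclosure uses the SPORCO library in MATLAB, the Python release of SPORCO provides corresponding ADMM-based convolutional sparse coding and CCMOD-based dictionary learning solvers. A minimal single-modality sketch is shown below; the class names and option keys reflect typical SPORCO releases and may differ across versions, and the random arrays are placeholders for real training images:

import numpy as np
from sporco.admm import cbpdn
from sporco.dictlrn import cbpdndl

# Placeholder training data: ten single-channel 256 x 256 images and four 8 x 8 filters
imgs = np.random.standard_normal((256, 256, 10)).astype(np.float32)
D0 = np.random.standard_normal((8, 8, 4)).astype(np.float32)

# Convolutional dictionary learning: ADMM sparse coding alternated with a constrained (CCMOD-style) dictionary update
dl_opt = cbpdndl.ConvBPDNDictLearn.Options({'Verbose': False, 'MaxMainIter': 50})
learner = cbpdndl.ConvBPDNDictLearn(D0, imgs, lmbda=0.1, opt=dl_opt)
D = learner.solve()

# Convolutional sparse coding of one image with the learned dictionary (equation 2 with the dictionary fixed)
cs_opt = cbpdn.ConvBPDN.Options({'Verbose': False, 'MaxMainIter': 100, 'RelStopTol': 5e-3})
coder = cbpdn.ConvBPDN(D.squeeze(), imgs[..., 0], lmbda=0.1, opt=cs_opt)
A = coder.solve()
reconstruction = coder.reconstruct().squeeze()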
Embodiments of present disclosure provide a method and system for multi-modal image super resolution using convolutional dictionaries. Initially a plurality of training images are obtained. The plurality of training images comprise a set of low-resolution (LR) images ‘X’ of a target modality, a set of high-resolution (HR) images ‘Y’ of a guidance modality, and a set of high-resolution (HR) images ‘Z’ of the target modality. Then, a plurality of dictionaries and associated plurality of sparse coefficients are initialized. The plurality of dictionaries comprise i) a first convolutional dictionary ‘S’ associated with a first set of sparse coefficients ‘A’ among the plurality of sparse coefficients. The first convolutional dictionary comprises M filters to extract features from the set of low-resolution images of the target modality; ii) a second convolutional dictionary ‘G’ associated with a second set of sparse coefficients ‘B’ among the plurality of sparse coefficients. The second convolutional dictionary comprises M filters to extract features from the set of high-resolution images of the guidance modality; iii) a first coupling convolutional dictionary ‘W’; and iv) a second coupling convolutional dictionary ‘V’. The first and second coupling convolutional dictionaries model the relationship between the target and guidance modalities. Initializing the dictionaries and coefficients provides a starting point for the training process. The initialized plurality of dictionaries and associated plurality of sparse coefficients are jointly trained (updated) iteratively using the plurality of training images until convergence of an objective function is achieved. At each iteration, the sparse coefficients are trained by keeping dictionaries fixed and then the dictionaries are trained by keeping sparse coefficients fixed. The trained plurality of dictionaries and the associated plurality of sparse coefficients obtained upon achieving the convergence of the objective function are used for performing multimodal image super resolution.
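As an illustration of the joint training loop described above, a minimal Python sketch is given below. It assumes a single training image per modality, single-channel data, and circular boundary conditions, and it replaces the ADMM-based coefficient updates and CCMOD-based dictionary updates of the disclosure with simple proximal-gradient and projected-gradient steps for brevity; all function names, step sizes, and hyperparameters are illustrative only and are not part of the disclosure.

import numpy as np

def freq(filters, shape):
    # Zero-pad each k x k filter to the image size and take its 2-D DFT
    return np.fft.fft2(filters, s=shape)

def recon(Df, C):
    # Sum over filters of circular convolutions, computed in the DFT domain
    return np.real(np.fft.ifft2((Df * np.fft.fft2(C)).sum(axis=0)))

def soft(v, t):
    # Soft-thresholding (proximal operator of the l1 norm)
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def objective(X, Y, Z, Sf, Gf, Wf, Vf, A, B, lam, mu):
    # Joint loss: two reconstruction terms, one coupling term, and sparsity penalties
    e1 = X - recon(Sf, A)
    e2 = Y - recon(Gf, B)
    e3 = Z - recon(Wf, A) - recon(Vf, B)
    return (0.5 * (e1 ** 2).sum() + 0.5 * (e2 ** 2).sum() + 0.5 * mu * (e3 ** 2).sum()
            + lam * (np.abs(A).sum() + np.abs(B).sum()))

def train(X, Y, Z, M=4, k=8, lam=0.05, mu=1.0, step=1e-2, dstep=1e-3, tol=1e-4, iters=300, seed=0):
    rng = np.random.default_rng(seed)
    shp = X.shape

    def init_dict():
        d = rng.standard_normal((M, k, k))
        return d / np.linalg.norm(d.reshape(M, -1), axis=1)[:, None, None]

    S, G, W, V = init_dict(), init_dict(), init_dict(), init_dict()  # initialize the four dictionaries
    A = np.zeros((M,) + shp)                                         # and their associated
    B = np.zeros((M,) + shp)                                         # sparse coefficient maps
    prev = np.inf
    for _ in range(iters):
        Sf, Gf, Wf, Vf = (freq(D, shp) for D in (S, G, W, V))
        # Update sparse coefficients with dictionaries fixed (one proximal-gradient step each)
        rx = np.fft.fft2(recon(Sf, A) - X)
        rz = np.fft.fft2(recon(Wf, A) + recon(Vf, B) - Z)
        gA = np.real(np.fft.ifft2(np.conj(Sf) * rx + mu * np.conj(Wf) * rz))
        A = soft(A - step * gA, step * lam)
        ry = np.fft.fft2(recon(Gf, B) - Y)
        rz = np.fft.fft2(recon(Wf, A) + recon(Vf, B) - Z)
        gB = np.real(np.fft.ifft2(np.conj(Gf) * ry + mu * np.conj(Vf) * rz))
        B = soft(B - step * gB, step * lam)
        # Update dictionaries with coefficients fixed (one projected-gradient step each)
        Af, Bf = np.fft.fft2(A), np.fft.fft2(B)
        rx = (Sf * Af).sum(axis=0) - np.fft.fft2(X)
        ry = (Gf * Bf).sum(axis=0) - np.fft.fft2(Y)
        rz = (Wf * Af).sum(axis=0) + (Vf * Bf).sum(axis=0) - np.fft.fft2(Z)
        for D, Cf, rf in ((S, Af, rx), (G, Bf, ry), (W, Af, mu * rz), (V, Bf, mu * rz)):
            g = np.real(np.fft.ifft2(np.conj(Cf) * rf))[:, :k, :k]  # gradient restricted to filter support
            D -= dstep * g
            nrm = np.maximum(np.linalg.norm(D.reshape(M, -1), axis=1), 1.0)
            D /= nrm[:, None, None]                                  # project filters onto the unit l2 ball
        # Convergence check on the joint objective (cf. equation 4)
        Sf, Gf, Wf, Vf = (freq(D, shp) for D in (S, G, W, V))
        obj = objective(X, Y, Z, Sf, Gf, Wf, Vf, A, B, lam, mu)
        if abs(prev - obj) < tol * max(1.0, obj):
            break
        prev = obj
    return S, G, W, V, A, B

In an actual implementation, each coefficient update would be carried out by the ADMM iterations described with reference to equations 6 to 14 below, and each dictionary update by the CCMOD technique.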
When a new low-resolution image of the target modality and a high-resolution image of the guidance modality are obtained, the plurality of sparse coefficients are computed based on the trained plurality of dictionaries and the obtained images. A first set of coefficients A is computed based on the trained first convolutional dictionary S and the low-resolution image of the target modality by using a standard convolutional sparse coding update. Similarly, a second set of coefficients B is computed based on the trained second convolutional dictionary G and the high-resolution image of the guidance modality by using a standard convolutional sparse coding update. Then, a high-resolution image Ztest of the target modality is generated using the trained first coupling convolutional dictionary, the trained second coupling convolutional dictionary, the first set of coefficients, and the second set of coefficients as Ztest = WA + VB.
Referring now to the drawings, and more particularly to
The I/O interface device(s) (106) can include a variety of software and hardware interfaces, for example, a web interface, a graphical user interface, and the like and can facilitate multiple communications within a wide variety of networks and protocol types, including wired networks, for example, LAN, cable, etc., and wireless networks, such as WLAN, cellular, or satellite. In an embodiment, the I/O interface device(s) (106) receives low resolution image of target modality and high resolution image of guidance modality as input and provides high resolution image of target modality as output. The memory (102) may include any computer-readable medium known in the art including, for example, volatile memory, such as static random access memory (SRAM) and dynamic random access memory (DRAM), and/or non-volatile memory, such as read only memory (ROM), erasable programmable ROM, flash memories, hard disks, optical disks, and magnetic tapes. Functions of the components of system 100 are explained in conjunction with flow diagram depicted in
In an embodiment, the system 100 comprises one or more data storage devices or the memory (102) operatively coupled to the processor(s) (104) and is configured to store instructions for execution of steps of the method (200) depicted in
Once the plurality of dictionaries and associated plurality of sparse coefficients are initialized, at step 206 of the method 200, the one or more hardware processors are configured to jointly train (or update) the initialized plurality of dictionaries and the associated plurality of sparse coefficients using the plurality of training images by performing steps 206A and 206B iteratively until convergence of an objective function is achieved. The objective function is given by equation 4. Convergence of the objective function is achieved when the loss given by equation 4 does not change significantly over subsequent iterations, i.e., when the absolute value of the change is less than an empirically determined threshold. The trained plurality of dictionaries and the associated plurality of sparse coefficients obtained upon achieving the convergence of the objective function are used for performing a multimodal image super resolution.
Equation 4 is derived from equation 5 by using the block-structured notation for dictionaries and coefficients as defined in equation 2 and considering S_m, G_m, W_m, V_m as linear operators such that S_m a_{m,i} = s_m * a_{m,i}, G_m b_{m,i} = g_m * b_{m,i}, W_m a_{m,i} = w_m * a_{m,i}, and V_m b_{m,i} = v_m * b_{m,i}, where s_m, g_m, w_m, and v_m denote the respective dictionary filters.
The first two terms in equations 4 and 5 ensure that each of the dictionary filters and coefficients are learnt in such a way that they reconstruct the images of respective modality well. The third term defines coupling between coefficients of different image modalities to reconstruct the HR image of target modality. The remaining terms constrain the learned coefficients to be sparse.
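Equations 4 and 5 themselves are not reproduced in this text. Based on the description of their terms above, the block-structured objective of equation 4 takes a form along the following lines (in LaTeX notation; the exact weights and constraints of the disclosure may differ):

\min_{S,G,W,V,A,B} \; \frac{1}{2}\|X - SA\|_2^2 + \frac{1}{2}\|Y - GB\|_2^2 + \frac{\mu}{2}\|Z - WA - VB\|_2^2 + \lambda_A\|A\|_1 + \lambda_B\|B\|_1 \quad \text{s.t. } \|s_m\|_2 \le 1,\ \|g_m\|_2 \le 1,\ \|w_m\|_2 \le 1,\ \|v_m\|_2 \le 1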
At step 206A, the plurality of sparse coefficients are trained (alternatively referred to as learnt or updated) by keeping the plurality of dictionaries fixed. The first set of sparse coefficients is updated based on the set of LR images of the target modality, the set of HR images of the target modality, the first convolutional dictionary, the first coupling convolutional dictionary, the second coupling convolutional dictionary, and the second set of sparse coefficients by solving equation 6 using the Alternating Direction Method of Multipliers (ADMM) technique. The second set of sparse coefficients is updated based on the set of HR images of the guidance modality, the set of HR images of the target modality, the second convolutional dictionary, the first coupling convolutional dictionary, the second coupling convolutional dictionary, and the first set of sparse coefficients by solving equation 7 using the ADMM technique.
Applying ADMM to equation 6 using variable splitting, by introducing an auxiliary variable P_a constrained to be equal to the primary variable A, results in equation 8.
With U_a as the dual variable, the ADMM iterations are given by equations 9 to 11, where ρ controls the convergence rate.
Taking Q_a = P_a^t − U_a^t, the solution for A is obtained by taking the derivative of equation 9 with respect to A and equating it to zero, which results in equation 12.
Applying the Discrete Fourier Transform (DFT) to equation 12 results in equation 13, where ^ (circumflex) indicates the DFT of the respective parameters.
The solution to equation 13 is obtained using the iterated Sherman-Morrison algorithm. The solution to equation 10 is obtained by soft-thresholding, similar to the work in [Fangyuan Gao et al., “Multi-modal convolutional dictionary learning,” IEEE Transactions on Image Processing, vol. 31, pp. 1325-1339, 2022], and equation 11 is solved by simple arithmetic operations. Similarly, the solution to equation 7 is given by equation 14, where Q_b = P_b^t − U_b^t.
The auxiliary variable P_b and dual variable U_b associated with the second set of coefficients B follow standard updates similar to equations 10 and 11.
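To make the structure of these ADMM iterations concrete, a small dense-matrix Python sketch for a generic problem of the form min_a (1/2)||Da − q||_2^2 + λ||a||_1 is given below. The primal, soft-thresholding, and dual steps mirror equations 9 to 11, while the direct matrix solve stands in for the DFT-domain solve of equations 12 and 13 via the iterated Sherman-Morrison algorithm. All variable names are illustrative only.

import numpy as np

def soft(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def admm_l1(D, q, lam=0.1, rho=1.0, iters=100):
    # ADMM for min_a 0.5*||Da - q||_2^2 + lam*||a||_1 with the splitting a = p
    n = D.shape[1]
    a, p, u = np.zeros(n), np.zeros(n), np.zeros(n)
    # Pre-factor (D^T D + rho*I); the disclosure instead solves the corresponding
    # frequency-domain system (equation 13) with the iterated Sherman-Morrison algorithm
    Minv = np.linalg.inv(D.T @ D + rho * np.eye(n))
    for _ in range(iters):
        a = Minv @ (D.T @ q + rho * (p - u))  # primal update (cf. equations 9 and 12)
        p = soft(a + u, lam / rho)            # auxiliary update by soft-thresholding (cf. equation 10)
        u = u + a - p                         # dual update (cf. equation 11)
    return p

# Toy usage with a sparse ground truth
rng = np.random.default_rng(0)
D = rng.standard_normal((40, 80))
a_true = np.zeros(80)
a_true[[3, 17, 42]] = [1.0, -2.0, 0.5]
a_hat = admm_l1(D, D @ a_true)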
Once the plurality of sparse coefficients are updated, at step 206B, the plurality of dictionaries are trained by keeping the plurality of sparse coefficients fixed. The first convolutional dictionary is updated based on the set of LR images of the target modality using the Convolutional Constrained Method of Optimal Directions (CCMOD) technique. The second convolutional dictionary is updated based on the set of HR images of the guidance modality using the CCMOD technique. The first coupling convolutional dictionary is updated based on the set of HR images of the target modality, the first set of sparse coefficients, the second set of sparse coefficients, and the second coupling convolutional dictionary by converting the problem into a standard Convolutional Dictionary Learning (CDL) problem that minimizes (1/2)||X' − WA||_2^2, wherein X' = Z − VB. Similarly, the second coupling convolutional dictionary is updated based on the set of HR images of the target modality, the first set of sparse coefficients, the second set of sparse coefficients, and the first coupling convolutional dictionary by converting the problem into a standard CDL problem that minimizes (1/2)||Y' − VB||_2^2, wherein Y' = Z − WA. The plurality of dictionaries obtained after training are used to perform MISR.
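The reduction of the coupling-dictionary updates to standard CDL problems can be sketched as follows: with the coefficient maps fixed, the residual images X' = Z − VB and Y' = Z − WA are formed, and W (respectively V) is then refit to reconstruct the corresponding residual, e.g., by a CCMOD-style constrained least-squares step. The Python sketch below only forms the residuals; all names are illustrative and circular boundary conditions are assumed.

import numpy as np

def recon(D, C):
    # Sum over filters of circular convolutions of each filter with its coefficient map
    Df = np.fft.fft2(D, s=C.shape[1:])
    return np.real(np.fft.ifft2((Df * np.fft.fft2(C)).sum(axis=0)))

def coupling_residuals(Z, W, V, A, B):
    # Residual images that turn the coupled updates into two standard CDL dictionary fits:
    # W is refit so that W*A approximates X' = Z - VB, and V so that V*B approximates Y' = Z - WA
    X_res = Z - recon(V, B)   # target for updating W
    Y_res = Z - recon(W, A)   # target for updating V
    return X_res, Y_res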
The plurality of dictionaries obtained upon achieving the convergence of the objective function are used for performing multimodal image super resolution on a new low-resolution image Xtest of the target modality by using an HR image Ytest of the guidance modality. Initially, a first set of test coefficients Atest is computed based on the learned first convolutional dictionary S and the LR image Xtest of the target modality following a standard convolutional sparse coding update by solving equation 15.
Similarly, a second set of test coefficients Btest is computed based on the learned second convolutional dictionary G and the HR image of the guidance modality Ytest following the standard convolutional sparse coding update by solving equation 16.
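For illustration, the test-time convolutional sparse coding of equations 15 and 16 can be approximated with a simple proximal-gradient (ISTA) solver; the sketch below assumes circular boundary conditions and stands in for the standard convolutional sparse coding update (in practice an ADMM solver such as the one sketched earlier would be used). Names, step sizes, and iteration counts are illustrative.

import numpy as np

def csc_ista(x, D, lam=0.05, step=1e-2, iters=200):
    # Sparse-code image x against filters D of shape (M, k, k):
    # min_A 0.5*||sum_m d_m * A_m - x||_2^2 + lam*||A||_1
    Df = np.fft.fft2(D, s=x.shape)            # filters zero-padded to the image size, in the DFT domain
    A = np.zeros((D.shape[0],) + x.shape)
    for _ in range(iters):
        resid = (Df * np.fft.fft2(A)).sum(axis=0) - np.fft.fft2(x)
        grad = np.real(np.fft.ifft2(np.conj(Df) * resid))
        A = A - step * grad
        A = np.sign(A) * np.maximum(np.abs(A) - step * lam, 0.0)   # soft-thresholding
    return A

# Atest = csc_ista(Xtest, S_trained)   (cf. equation 15)
# Btest = csc_ista(Ytest, G_trained)   (cf. equation 16)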
Finally, a high resolution image Ztest of the target modality is generated using the learned first coupling convolutional dictionary W, the second coupling convolutional dictionary V, the first set of test coefficients, and the second set of test coefficients as Ztest=WAtest+VBtest.
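A minimal sketch of this final reconstruction step, assuming the trained coupling dictionaries and the test coefficient maps computed as above (circular convolution; names are illustrative):

import numpy as np

def super_resolve(W, V, A_test, B_test):
    # Ztest = W * Atest + V * Btest: sum over filters of convolutions with the coupling dictionaries
    shp = A_test.shape[1:]
    Wf = np.fft.fft2(W, s=shp)
    Vf = np.fft.fft2(V, s=shp)
    Zf = (Wf * np.fft.fft2(A_test)).sum(axis=0) + (Vf * np.fft.fft2(B_test)).sum(axis=0)
    return np.real(np.fft.ifft2(Zf))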
Data Description: Datasets used for evaluating the performance of the method 200 are: (1) RGB-Multispectral dataset [Ayan Chakrabarti and Todd Zickler, “Statistics of real-world hyperspectral images,” in CVPR 2011. IEEE, 2011, pp. 193-200.] and (2) RGB-NIR dataset [Matthew Brown and Sabine Susstrunk, “Multi-spectral sift for scene category recognition,” in CVPR 2011. IEEE, 2011, pp. 177-184.]. In both datasets, the RGB image is considered the guidance modality. The multispectral image is considered the target modality in the first dataset, and the NIR (Near InfraRed) image is considered the target modality in the second dataset. The two datasets contain the HR images of both the guidance (Y) and target (Z) modalities. For experimentation purposes, the LR image of the target modality is generated by downsizing the HR image by a required factor and then applying bicubic interpolation on the downsampled image to upscale it by the same factor. A factor of 4 is considered for downsampling both RGB/Multispectral and RGB/NIR for a fair comparison with benchmark techniques. Here, the RGB image used as the guidance modality is converted to grayscale. Also, the multispectral image at 640 nm is considered in the experiments for the RGB-Multispectral dataset.
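The LR-image simulation described above can be scripted, for example, with Pillow's bicubic resampling; a minimal sketch with placeholder file names and the downsampling factor of 4 used in the experiments:

from PIL import Image

SCALE = 4  # downsampling factor used in the experiments

hr_target = Image.open('target_hr.png')                 # HR image of the target modality (placeholder path)
guidance = Image.open('guidance_rgb.png').convert('L')  # RGB guidance image converted to grayscale

w, h = hr_target.size
lr_small = hr_target.resize((w // SCALE, h // SCALE), Image.BICUBIC)  # downsize by the factor
lr_target = lr_small.resize((w, h), Image.BICUBIC)                    # bicubic upscaling back to the HR grid
lr_target.save('target_lr_bicubic.png')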
Benchmark Methods: The method 200 is compared with eight state-of-the-art MISR techniques including:
Results: The reconstruction quality of the HR image of the target modality is assessed using Structural SIMilarity (SSIM) and Peak Signal to Noise Ratio (PSNR) metrics. The benchmark methods are trained on 31 image pairs for RGB/NIR and 35 image pairs for RGB/Multispectral. Each 512×512 image is divided into non-overlapping patches of size 16×16. During testing, the patches of the test image are reconstructed individually and combined to create the full image. On the other hand, the techniques using convolutional dictionary learning (method 200 and MCDL) consider non-overlapping patches of size 256×256 for training, and testing is conducted on the full image. The method 200 is trained only on 10 image pairs for each dataset instead of 31 and 35 image pairs, respectively, considered for benchmark methods. The hyperparameters associated with all the techniques are tuned using grid search. The method 200 is implemented using the SPORCO library in MATLAB. The results for method 200 and MCDL are reported using 4 filters of size 8×8 that gave optimal performance for both the datasets.
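Reconstruction quality can be scored, for example, with scikit-image's implementations of these metrics; a minimal sketch with placeholder arrays (images assumed scaled to [0, 1]):

import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

# Placeholder data: z_true is the ground-truth HR target image, z_hat the reconstruction
rng = np.random.default_rng(0)
z_true = rng.random((512, 512))
z_hat = np.clip(z_true + 0.01 * rng.standard_normal((512, 512)), 0.0, 1.0)

psnr = peak_signal_noise_ratio(z_true, z_hat, data_range=1.0)
ssim = structural_similarity(z_true, z_hat, data_range=1.0)
print(f'PSNR: {psnr:.2f} dB, SSIM: {ssim:.4f}')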
Tables 1 and 2 summarize the MISR results for the first and second datasets, respectively, obtained with 5 test image pairs. It can be observed that the joint learning-based approaches (Coupled DL, JCTL, JCDTL, and the method 200) display superior performance compared to other filtering (DJF, JR, GF, and JBF) and two-stage (MCDL) based approaches for most of the images. This is because joint learning enables effective learning of discriminative and common features or representations from each modality (LR image of target and HR image of guidance) that assists in improved reconstruction of HR image of target modality. Among the joint learning methods, the method 200 shows improved reconstruction compared to other benchmark methods for most of the images, despite training with limited data. This demonstrates the potential of using shift-invariant dictionaries, i.e., convolutional dictionaries, in learning representation for robust and effective modelling for MISR. It is important to note that compared to the two-stage CDL-based MCDL approach that requires learning 6 convolutional dictionaries and 3 associated sparse coefficients (1 common and 2 unique for the respective modality), method 200 requires learning only 4 convolutional dictionaries and 2 associated sparse coefficients. Thus, even with reduced complexity, the disclosed method provides improved performance over the MCDL approach as shown in tables 1 and 2. The convergence plot of the method 200 is given in
The written description describes the subject matter herein to enable any person skilled in the art to make and use the embodiments. The scope of the subject matter embodiments is defined by the claims and may include other modifications that occur to those skilled in the art. Such other modifications are intended to be within the scope of the claims if they have similar elements that do not differ from the literal language of the claims or if they include equivalent elements with insubstantial differences from the literal language of the claims.
It is to be understood that the scope of the protection is extended to such a program and in addition to a computer-readable means having a message therein; such computer-readable storage means contain program-code means for implementation of one or more steps of the method, when the program runs on a server or mobile device or any suitable programmable device. The hardware device can be any kind of device which can be programmed including e.g., any kind of computer like a server or a personal computer, or the like, or any combination thereof. The device may also include means which could be e.g., hardware means like e.g., an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA), or a combination of hardware and software means, e.g., an ASIC and an FPGA, or at least one microprocessor and at least one memory with software processing components located therein. Thus, the means can include both hardware means and software means. The method embodiments described herein could be implemented in hardware and software. The device may also include software means. Alternatively, the embodiments may be implemented on different hardware devices, e.g., using a plurality of CPUs.
The embodiments herein can comprise hardware and software elements. The embodiments that are implemented in software include but are not limited to, firmware, resident software, microcode, etc. The functions performed by various components described herein may be implemented in other components or combinations of other components. For the purposes of this description, a computer-usable or computer readable medium can be any apparatus that can comprise, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device.
The illustrated steps are set out to explain the exemplary embodiments shown, and it should be anticipated that ongoing technological development will change the manner in which particular functions are performed. These examples are presented herein for purposes of illustration, and not limitation. Further, the boundaries of the functional building blocks have been arbitrarily defined herein for the convenience of the description. Alternative boundaries can be defined so long as the specified functions and relationships thereof are appropriately performed. Alternatives (including equivalents, extensions, variations, deviations, etc., of those described herein) will be apparent to persons skilled in the relevant art(s) based on the teachings contained herein. Such alternatives fall within the scope of the disclosed embodiments. Also, the words “comprising,” “having,” “containing,” and “including,” and other similar forms are intended to be equivalent in meaning and be open ended in that an item or items following any one of these words is not meant to be an exhaustive listing of such item or items, or meant to be limited to only the listed item or items. It must also be noted that as used herein and in the appended claims, the singular forms “a,” “an,” and “the” include plural references unless the context clearly dictates otherwise.
Furthermore, one or more computer-readable storage media may be utilized in implementing embodiments consistent with the present disclosure. A computer-readable storage medium refers to any type of physical memory on which information or data readable by a processor may be stored. Thus, a computer-readable storage medium may store instructions for execution by one or more processors, including instructions for causing the processor(s) to perform steps or stages consistent with the embodiments described herein. The term “computer-readable medium” should be understood to include tangible items and exclude carrier waves and transient signals, i.e., be non-transitory. Examples include random access memory (RAM), read-only memory (ROM), volatile memory, nonvolatile memory, hard drives, CD ROMs, DVDs, flash drives, disks, and any other known physical storage media.
It is intended that the disclosure and examples be considered as exemplary only, with a true scope of disclosed embodiments being indicated by the following claims.