The present disclosure generally relates to spectral imaging; and in particular, to a system and associated method for hyperspectral unmixing to quantify material presence in mixtures.
Hyperspectral imaging is a method of imaging where light radiance is densely sampled at multiple wavelengths. Increasing spectral resolution beyond a traditional camera's red, green, and blue spectral bands typically requires more expensive detectors and optics and/or a reduction in spatial resolution. However, hyperspectral imaging has demonstrated its utility in computer vision, biomedical imaging, and remote sensing. In particular, spectral information is critically important for understanding the reflectance and emission properties that allow materials to be recognized.
Spectral unmixing is a specific task within hyperspectral imaging with application to many land classification problems related to ecology, hydrology, and mineralogy. It is particularly useful for analyzing aerial images from aircraft or spacecraft to map the abundance of materials in a region of interest. While pure materials have characteristic spectral features, mixtures require additional adaptations to identify and quantify material presence.
It is with these observations in mind, among others, that various aspects of the present disclosure were conceived and developed.
The patent or application file contains at least one drawing executed in color. Copies of this patent or patent application publication with color drawing(s) will be provided by the Office upon request and payment of the necessary fee.
Corresponding reference characters indicate corresponding elements among the views of the drawings. The headings used in the figures do not limit the scope of the claims.
Various embodiments of a system and associated method for hyperspectral unmixing of measured spectra for quantifying pure materials in a mixture are described herein. Spectral variation in the mixture is identified by a dispersion model and incorporated into an end-to-end spectral unmixing system via differentiable programming that iteratively optimizes the dispersion model and identifies an abundance of each material in the mixture. A dispersion model is introduced into the unmixing system to simulate realistic spectral variation. This dispersion model is then utilized as a generative model within the spectral unmixing system, which applies an analysis-by-synthesis method to quantify materials in an observed mixture and to optimize parameters of the dispersion model. Further, a technique for inverse rendering using a convolutional neural network to predict parameters of the generative model is introduced to enhance performance and speed when training data is available. State-of-the-art results were achieved on both infrared and visible-to-near-infrared (VNIR) datasets as compared to baselines, showing promise for the synergy between physics-based dispersion models and deep learning in hyperspectral unmixing. Referring to the drawings, embodiments of systems and an associated method for hyperspectral unmixing are illustrated and generally indicated as 100, 200 and 300 in
Referring to
The spectral unmixing system 100 includes the generative dispersion model 110 that generates and renders spectral endmember variation and other optical properties for various pure materials based on initial conditions and a measured spectra b. The spectral unmixing system 100 aims to recreate the measured spectra b using the dispersion model 110 by identifying one or more endmember spectra present in the measured spectra b. The endmember spectra are aggregated into a matrix A. The spectral unmixing system 100 further estimates dispersion model parameters ∧ to fit the endmember spectra A to the dispersion model 110. The spectral unmixing system 100 further provides a linear unmixing pipeline 120 for linearly unmixing an observed spectrum of the material. The linear unmixing pipeline 120 alternates between updating the dispersion model parameters ∧ for the dispersion model 110 and optimizing abundances x using a least-squares model. The output of the system 100 includes updated dispersion model parameters ∧ for the dispersion model 110, optimized abundances x for the material in question, and modelled spectra Ax = b_out.
Further, the system 100 is unique in that it uses the generative dispersion model 110 to account for spectral variability in the unmixing pipeline 120, and iteratively solves spectral unmixing in a self-supervised fashion using analysis-by-synthesis optimization with the dispersion model 110 as a “synthesis” step.
A second embodiment of the system 200 is featured in
Linear mixing assumes electromagnetic waves produced from pure materials combine linearly and are scaled by the material abundance. Mathematically this is expressed as b = Ax + η, where b is the measured spectra, A is a matrix whose columns are the pure material spectra, η is the measurement noise, and x is the abundance of each pure material. In simpler terms, the abundance x denotes a proportion of each observed material. Linear mixing assumes that the pure material spectra, referred to as endmember spectra, are known beforehand. Nonlinear effects are known to occur when photons interact with multiple materials within a scene.
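For illustration, the following is a minimal sketch of the linear mixing model and a simple constrained least-squares abundance estimate. The endmember library, noise level, and the augmented-row trick for the sum-to-one constraint are illustrative choices, not the specific implementation of the unmixing pipeline 120 described below.

import numpy as np
from scipy.optimize import nnls

# Illustration of the linear mixing model b = Ax + eta.
# A: (num_bands x num_endmembers) matrix of pure-material (endmember) spectra.
# x: (num_endmembers,) abundances; b: (num_bands,) observed mixed spectrum.
rng = np.random.default_rng(0)
num_bands, num_endmembers = 100, 3
A = rng.random((num_bands, num_endmembers))              # placeholder endmember library
x_true = np.array([0.6, 0.3, 0.1])                       # abundances sum to one
b = A @ x_true + 0.01 * rng.standard_normal(num_bands)   # linear mixture plus noise

# A simple constrained least-squares estimate of x: non-negativity via NNLS,
# with a heavily weighted row of ones appended to softly enforce sum(x) = 1.
w = 1e3
A_aug = np.vstack([A, w * np.ones(num_endmembers)])
b_aug = np.append(b, w * 1.0)
x_est, _ = nnls(A_aug, b_aug)
print(x_est)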
A key challenge that affects both linear and nonlinear mixing models is that pure materials have an inherent variability in their spectral signatures, and thus cannot be represented by a single characteristic spectrum. Spectral variability of endmembers is caused by subtle absorption band differences due to factors such as different grain sizes or differing ratios of molecular bonds, as shown in
Recently, differentiable programming has become a popular research area due to its potential to bridge gaps between physics-based and machine learning-based techniques for computer vision and graphics. One key contribution of the present spectral unmixing system 100 is to leverage differentiable programming by modelling the variation of spectra of an observed mixture using the dispersion model 110, and incorporating this differentiable dispersion model 110 into an end-to-end spectral unmixing pipeline 120 that identifies abundances of each detected material in the observed mixture. Such an approach has the capacity to unmix scenes with a large amount of variability, while constraining the predictions to be physically plausible. These physically plausible variations of endmember spectra also provide additional science data, as the variation of absorption bands can reveal properties about the composition and history of the material. The present system 100 uses the generative physics dispersion model 110 to account for spectral variability in the linear unmixing pipeline 120.
Referring to
Further, a second embodiment of the present system, designated 200, defines the computing device 102 that includes a convolutional neural network 210 that jointly estimates parameters ∧ for the dispersion model 110 and mineral abundances x for spectral unmixing of a measured spectra b taken from the spectrometer 104. This system 200 requires training data but is computationally efficient at test time and outperforms analysis-by-synthesis and other state-of-the-art methods for determining parameters ∧ for the dispersion model 110.
Extensive analysis of the systems 100 and 200 is provided with respect to noise and convergence criteria. For validation, both synthetic and real datasets are used for testing, with hyperspectral observations in the visible and near infrared (VNIR) and the mid to far infrared (IR). The datasets also span three different environments: laboratory, aircraft, and satellite-based spectrometers. The systems 100 and 200 were compared against several baselines from the literature and shown to achieve state-of-the-art results across all datasets.
The present approach to hyperspectral unmixing as disclosed herein features two main components: (1) use of the physically-accurate dispersion model 110 for pure endmember spectra, and (2) a differentiable programming system 120 which is incorporated with the model 110 to perform spectral unmixing. This approach has the synergistic benefit of leveraging prior domain knowledge while learning from data. The system 100 solves spectral unmixing in a self-supervised fashion using an analysis-by-synthesis optimization pipeline 120 with the dispersion model 110 as the synthesis step. Further, it is shown how inverse rendering via a convolutional neural network (CNN) 210 can learn parameters ∧ of the dispersion model 110 to speed up the present system 100 and improve performance when training data is available.
Dispersion Model
First, the dispersion model 110 for generating endmember spectra will be discussed. An “endmember” or “endmember spectrum” is the spectral curve of emissivity ϵ as a function of wavenumber ω. Each pure material has a characteristic endmember spectrum, although these spectra can vary, which is the problem the present disclosure aims to solve. Let ϵ_measured(ω) be an endmember spectrum which has been measured, typically in a lab or in the field, whose emissivity is sampled at different wavenumbers: [ϵ_measured(ω_1), . . . , ϵ_measured(ω_N)]^T. The present system 100 fits a model ϵ_model(∧;ω) with parameters ∧ such that the following loss is minimized: L(∧) = Σ_{i=1}^{N} (ϵ_measured(ω_i) − ϵ_model(∧;ω_i))². That is, the parameters ∧ that dictate the model emissivity of an endmember spectrum are fitted to the measured spectrum such that a loss between an endmember spectrum generated by the dispersion model 110 and an actual measured endmember spectrum is minimized. In practice, regularization and constraints need to be added to this endmember loss for better fitting, which is described after the derivation of the dispersion model 110.
The dispersion model 110 of endmember spectra is derived from an atomistic oscillator driven by electromagnetic waves impinging on the molecular structure of the pure material as illustrated in
Let ∧ = [ρ, ω_0, γ, ϵ_r] be a matrix of spectral parameters, where ρ, ω_0, γ, ϵ_r ∈ ℝ^K and K is a model hyperparameter corresponding to the number of distinct mass-spring equations used to model the emissivity. ρ is the band strength, ω_0 is the resonant frequency, γ is the frictional force (damping coefficient), and ϵ_r is the relative dielectric permittivity. Note: usually ϵ_r is a constant vector which does not vary with K. Thus, ∧ ∈ ℝ^{K×4}. The refractive index terms n, k are given as follows:
where the expressions for θ, β, ϕ are given as follows:
Note that subscript k denotes the kth coordinate of the corresponding vector. There is also another useful relation: n² − k² = θ and nk = ϕ. The complex refractive index is then defined as ñ(∧;ω) = n(∧;ω) − i·k(∧;ω), where i = √(−1) is the imaginary unit. Hence, the emissivity can be calculated as follows
When considering minerals, M ∈ ℕ, the number of optical axes of symmetry in the crystal structure (e.g., 2 axes of symmetry in quartz), is introduced to define the full model:
where a different parameter matrix ∧m and weight αm is used for each optical axis of symmetry.
The dispersion model 110 has been primarily used to analyze optical properties of materials to determine n and k, which can then be applied to optical models such as radiative transfer. After n and k are found, spectra such as reflectance, emissivity, and transmissivity can be generated. In particular, it should be noted that fine-grained control of the parameters ∧ of the dispersion model 110 can realistically render spectral variation that occurs in hyperspectral data. The spectral unmixing system 100 leverages these properties for spectral unmixing.
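For illustration, the following is a minimal sketch of a classical Lorentz-oscillator dispersion model of the kind described above, written with PyTorch so that it remains differentiable. Because the exact expressions for θ, β, and ϕ referenced above are not reproduced here, this sketch uses the textbook Lorentz form of the complex dielectric function and a normal-incidence Fresnel reflectance with emissivity 1 − R; the function name, parameter values, and number of oscillators are illustrative assumptions rather than the disclosed formulation.

import torch

def lorentz_emissivity(rho, omega0, gamma, eps_r, omega):
    """Sketch of a Lorentz-oscillator dispersion model (textbook form, not the
    disclosure's exact theta/beta/phi expressions). rho, omega0, gamma each have
    K entries (one per oscillator); omega is a 1-D tensor of wavenumbers."""
    # Complex dielectric function:
    # eps(w) = eps_r + sum_k rho_k * w0_k^2 / (w0_k^2 - w^2 - i*gamma_k*w)
    w = omega.unsqueeze(-1)                                       # (N, 1)
    eps = eps_r + (rho * omega0**2 /
                   (omega0**2 - w**2 - 1j * gamma * w)).sum(-1)   # (N,) complex
    n_complex = torch.sqrt(eps)                                   # complex refractive index
    n, k = n_complex.real, n_complex.imag.abs()
    # Normal-incidence Fresnel reflectance and Kirchhoff emissivity e = 1 - R
    R = ((n - 1)**2 + k**2) / ((n + 1)**2 + k**2)
    return 1.0 - R

# Example: K = 2 oscillators evaluated over 300 wavenumber samples (illustrative values)
omega = torch.linspace(400.0, 1600.0, 300)
rho = torch.tensor([0.6, 0.3], requires_grad=True)
omega0 = torch.tensor([900.0, 1200.0], requires_grad=True)
gamma = torch.tensor([30.0, 50.0], requires_grad=True)
eps_r = torch.tensor(2.0, requires_grad=True)
spectrum = lorentz_emissivity(rho, omega0, gamma, eps_r, omega)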
Using the dispersion model 110 presented above, the model parameters ∧ must be robustly estimated to fit the spectra ϵ_measured captured in a lab or in the field. In some embodiments, the parameters ∧ are iteratively optimized with the abundances x in an alternating fashion until a loss between the measured spectra b and the fitted endmember spectra Ax is minimized. To fit the parameters ∧ of the dispersion model 110 to the measured spectra b, gradient descent is performed to efficiently find these parameters ∧. Using the chain rule on the loss function, it can be shown that
where
corresponds to that element of the parameter matrix, and all expressions are scalars once the coordinate is specified. While the partial derivatives can be calculated explicitly via symbolic toolboxes, the resulting expressions are too long to be presented here. For simplicity and ease of use, the autograd functionality in PyTorch was used to automatically compute derivatives for the model 110 via backpropagation.
One main challenge in performing endmember fitting is that the dispersion model 110 is not an injective function, and hence is typically not identifiable. In other words, more than one ∧ can result in the same fit. This can be solved, in part, through regularization to enforce sparsity. In the present implementation, the dispersion model 110 is initialized with K=50 rows of the parameter matrix. Since the parameter ρ controls the strength of the absorption band, small values of ρ do not contribute much energy to the spectra (unnecessary absorption bands) and can be pruned.
Experimentation showed that after performing sparse regression by penalizing the L1 norm of ρ, K is typically reduced to around 10-15.
Thus, the modified sparse regression problem may be written as:
where ∧_min and ∧_max restrict the variation of the dispersion parameters ∧ to a plausible range. In addition, endmembers (particularly minerals) can have multiple optical axes of symmetry described by separate spectra, which has been noted in the literature. Without prior knowledge of the number of axes for every material encountered, this optimization is run for both single and double axes, and the one with the lowest error is selected.
Despite the fact that this regression problem is non-convex, it is solved using gradient descent with a random initialization; this is known to converge to a local minimum with a probability of 1. A global minimum is not necessary at this stage. Endmember fitting is used to provide a good initialization point for the subsequent alternating minimization procedure introduced in the next subsection.
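As a concrete illustration of the endmember-fitting step, the following sketch fits dispersion parameters to one measured endmember spectrum using PyTorch autograd for the gradients, an L1 penalty on ρ to prune weak absorption bands, and clamping to keep the parameters within a plausible range. It assumes the hypothetical lorentz_emissivity function sketched above; the optimizer, learning rate, bounds, and iteration counts are illustrative rather than the disclosed settings.

import torch

def fit_endmember(eps_measured, omega, num_oscillators=50,
                  lam_sparsity=1e-2, lr=1e-2, num_iters=5000):
    """Fit dispersion parameters to one measured endmember spectrum (sketch).
    eps_measured, omega: 1-D tensors of emissivity samples and wavenumbers."""
    # Random initialization of the K x 4 parameter set (illustrative ranges)
    rho = torch.rand(num_oscillators, requires_grad=True)
    omega0 = (omega.min() + (omega.max() - omega.min())
              * torch.rand(num_oscillators)).requires_grad_(True)
    gamma = (10.0 * torch.rand(num_oscillators)).requires_grad_(True)
    eps_r = torch.tensor(2.0, requires_grad=True)
    params = [rho, omega0, gamma, eps_r]
    opt = torch.optim.Adam(params, lr=lr)

    for _ in range(num_iters):
        opt.zero_grad()
        eps_model = lorentz_emissivity(rho, omega0, gamma, eps_r, omega)
        data_loss = ((eps_measured - eps_model) ** 2).sum()
        loss = data_loss + lam_sparsity * rho.abs().sum()   # L1 penalty prunes weak bands
        loss.backward()                                      # autograd supplies the gradients
        opt.step()
        with torch.no_grad():                                # box constraints (Lambda_min, Lambda_max)
            rho.clamp_(0.0, 1.0)
            gamma.clamp_(1e-3, 200.0)
            omega0.clamp_(float(omega.min()), float(omega.max()))
            eps_r.clamp_(1.0, 10.0)
    return params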
Analysis-by-Synthesis Optimization: The full end-to-end spectral unmixing system 100 including the dispersion model 110 and linear unmixing module 120 is shown in
subject to the sum-to-one and non-negativity constraints ∥x∥_1 = 1, x ≥ 0. Given these constraints, one cannot impose sparsity with the usual L1 norm. Instead, an Lp norm with 0 &lt; p &lt; 1 is used to induce sparsity in the predicted abundances x.
The key to the spectral unmixing system 100 is that everything is fully differentiable, and thus the following equation can be minimized:
with respect to both the parameters of the dispersion model ∧ and the abundances x. This gives the recipe for hyperspectral unmixing: first, perform endmember fitting to initialize A(∧). Then, solve Equation (7) in an alternating fashion for x and ∧. One could also solve this equation jointly for both unknowns; however, it was found that the alternating optimization was faster and converged to better results.
The optimization problem established in Equation (7) is an alternating minimization problem and is unfortunately not convex. One popular approach to tackle nonconvex problems is to find a good initialization point and then execute a form of gradient descent. Inspired by this, A(∧) is first initialized by performing endmember fitting using Equation (6) as described in the previous subsection. Experiments that have been conducted indicate that this provides a useful initialization for the subsequent step. The unmixing pipeline 120 performs alternating minimization on Equation (7) for x and ∧. Note that each iteration of the resulting alternating minimization involves the solution of a sub-problem which has a convergence rate that depends on the condition number of the matrix A(∧).
In the ideal scenario, this initial matrix A(∧) would consist of the endmember spectra that fully characterizes the mixed spectra b. However, since spectra for the same material can significantly vary (see
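For illustration, the following sketch shows one way the alternating analysis-by-synthesis loop could be arranged: an abundance step that minimizes the reconstruction error plus an Lp sparsity term under the sum-to-one and non-negativity constraints (handled here with a softmax parameterization, one possible choice), followed by a dispersion step that refines ∧ with autograd. It assumes the hypothetical lorentz_emissivity and fit_endmember sketches above; the value of p, learning rates, and iteration counts are illustrative.

import torch

def unmix_analysis_by_synthesis(b, omega, endmember_params, p=0.7,
                                lam_sparsity=1e-3, outer_iters=20, inner_iters=200):
    """Alternating minimization sketch: abundance step, then dispersion-parameter step.
    b: measured mixed spectrum; endmember_params: list of fitted parameter sets,
    one per endmember (e.g. outputs of fit_endmember above)."""
    num_endmembers = len(endmember_params)
    z = torch.zeros(num_endmembers, requires_grad=True)        # softmax logits for abundances

    def render_A():
        # Synthesize the endmember matrix A(Lambda) from the dispersion model
        cols = [lorentz_emissivity(*params, omega) for params in endmember_params]
        return torch.stack(cols, dim=1)                         # (num_bands, num_endmembers)

    for _ in range(outer_iters):
        # --- abundance step: optimize x with A(Lambda) held fixed ---
        A = render_A().detach()
        opt_x = torch.optim.Adam([z], lr=1e-2)
        for _ in range(inner_iters):
            opt_x.zero_grad()
            x = torch.softmax(z, dim=0)                         # sum-to-one, non-negative
            loss = ((b - A @ x) ** 2).sum() + lam_sparsity * (x ** p).sum()  # Lp sparsity
            loss.backward()
            opt_x.step()

        # --- dispersion step: refine Lambda with abundances x held fixed ---
        x = torch.softmax(z, dim=0).detach()
        flat_params = [t for params in endmember_params for t in params]
        opt_lam = torch.optim.Adam(flat_params, lr=1e-3)
        for _ in range(inner_iters):
            opt_lam.zero_grad()
            loss = ((b - render_A() @ x) ** 2).sum()
            loss.backward()
            opt_lam.step()

    return torch.softmax(z, dim=0).detach(), render_A().detach()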
Inverse Rendering of Dispersion Model Parameters. The previous analysis-by-synthesis optimization does not require training data in order to perform spectral unmixing. However, there is room for further improvement by using labeled data to refine the parameter fitting of the model in the synthesis step. A convolutional neural network (CNN) 210 is trained to predict the parameters ∧ for the generative model, a technique known as inverse rendering in other domains. In
The CNN 210 architecture consists of several convolutional layers 210A followed by a series of fully-connected layers 210B. Using the CNN 210 for inverse rendering is significantly faster at test time as compared to the analysis-by-synthesis optimization. However, the use of the CNN 210 does require training data, which is often unavailable for certain tasks/datasets.
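The disclosure specifies only that the CNN 210 comprises several convolutional layers 210A followed by fully-connected layers 210B, so the following is a minimal sketch of one plausible 1-D CNN of that shape; the layer counts, channel widths, and the two output heads (dispersion parameters and abundances) are illustrative assumptions rather than the disclosed architecture.

import torch
import torch.nn as nn

class InverseRenderingCNN(nn.Module):
    """Sketch of CNN 210: 1-D convolutions over the spectrum (210A) followed by
    fully-connected layers (210B) that regress dispersion parameters and abundances.
    Layer sizes and the two-head output are illustrative assumptions."""
    def __init__(self, num_bands, num_endmembers, params_per_endmember):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(32, 32, kernel_size=5, padding=2), nn.ReLU(),
        )
        self.fc = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * num_bands, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
        )
        self.param_head = nn.Linear(256, num_endmembers * params_per_endmember)
        self.abundance_head = nn.Linear(256, num_endmembers)

    def forward(self, spectrum):                 # spectrum: (batch, num_bands)
        h = self.fc(self.conv(spectrum.unsqueeze(1)))
        dispersion_params = self.param_head(h)   # predicted parameters (flattened)
        abundances = torch.softmax(self.abundance_head(h), dim=-1)  # sum-to-one
        return dispersion_params, abundances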
Datasets. Three separate datasets were utilized to validate the present spectral unmixing system 100. In
Feely et al. Dataset. 90 samples are utilized from the Feely et al. dataset of thermal emission spectra in the infrared for various minerals measured in the lab. Ground truth was determined via optical petrography, and a labeled endmember library is provided. The limited amount of data is challenging for machine learning methods, so the dispersion model 110 is used to generate 50,000 additional synthetic spectra for dataset augmentation.
Gulfport dataset. The Gulfport dataset contains hyperspectral aerial images in the visible and near-infrared (VNIR) along with ground truth classification labels segmenting pixels into land types (e.g. grass, road, building). Although the dataset is for spectral classification, it can also be used to benchmark unmixing algorithms by creating synthetic mixtures of pure pixels from the Gulfport dataset with random abundances x. Spectral classification (Gulfport) and unmixing (Gulfport synthetic) tasks are performed in the testing. Both datasets are split into a train and test set (although some methods do not require training data), and the training data is augmented with 50,000 synthetically generated mixtures from the dispersion model 110.
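As an illustration of this mixture-generation step, the following sketch creates synthetic linear mixtures of pure pixels with random abundances x. The Dirichlet sampling of abundances, the noise level, and the function name are assumptions for illustration rather than the disclosed procedure.

import numpy as np

def make_synthetic_mixtures(pure_pixels, num_mixtures=50000, seed=0):
    """Create synthetic linear mixtures from pure pixels with random abundances (sketch).
    pure_pixels: (num_endmembers, num_bands) array of pure spectra, one row per class."""
    rng = np.random.default_rng(seed)
    num_endmembers, num_bands = pure_pixels.shape
    # Random abundances on the simplex (non-negative, sum to one)
    abundances = rng.dirichlet(np.ones(num_endmembers), size=num_mixtures)
    mixtures = abundances @ pure_pixels                       # linear mixing b = A x
    mixtures += 0.005 * rng.standard_normal(mixtures.shape)   # small additive noise
    return mixtures, abundances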
One main difficulty of this dataset is that the identified endmembers correspond to coarse materials such as grass and road as opposed to pure materials. Such endmembers can significantly vary across multiple pixels, but this spectral variation is not physically described by the dispersion model 110. To solve this problem, K-means clustering is used to learn exemplar endmembers for each category (e.g. grass, road, etc.). Then, the resulting centroid endmembers can be fit to the dispersion model 110 to allow further variation such as absorption band shifts in the spectra. It was found that K=5 clusters worked best for the Gulfport dataset.
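A minimal sketch of this exemplar-endmember step is given below, assuming scikit-learn's KMeans; the per-category dictionary structure and the function name are illustrative. Each returned centroid would subsequently be fit to the dispersion model 110 as described above.

import numpy as np
from sklearn.cluster import KMeans

def exemplar_endmembers(pixels_by_class, num_clusters=5, seed=0):
    """For each coarse class (e.g. grass, road), cluster its pixel spectra and keep
    the cluster centroids as exemplar endmembers (sketch; K=5 clusters per the text)."""
    exemplars = {}
    for label, spectra in pixels_by_class.items():     # spectra: (num_pixels, num_bands)
        km = KMeans(n_clusters=num_clusters, n_init=10, random_state=seed).fit(spectra)
        exemplars[label] = km.cluster_centers_          # centroids later fit to the dispersion model
    return exemplars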
TES Martian Dataset. The Thermal Emission Spectrometer (TES) uses Fourier Transform Infrared Spectroscopy to measure the Martian surface. The present testing utilizes pre-processing from Bandfield et al. and the endmember library used by Rogers et al. to analyze the Mars TES data. There is no ground truth for this dataset, as the true abundance of minerals on the Martian surface is unknown, so other metrics such as reconstruction error of the spectra are considered.
Baselines. The present system 100 is compared against several state-of-the-art baselines in the literature. The basic linear unmixing algorithm is Fully Constrained Least Squares (FCLS), which solves least squares with sum-to-one and non-negativity constraints on the abundances x. Two state-of-the-art statistical methods are used for modelling endmember variability as distributions: the Normal Compositional Model (NCM) and the Beta Compositional Model (BCM). NCM and BCM use a Gaussian and a Beta distribution respectively, perform expectation-maximization for unmixing, and require a small amount of training data to determine their model parameters.
The present system 100 is also compared against two state-of-the-art deep learning networks by Zhang et al. The first network utilizes a 1D CNN (CNN-1D) architecture, while the second network utilizes a 3D CNN with 1D convolutional kernels (CNN-3D). CNN-3D is only applicable to datasets with spatial information and is not testable on the Feely and Gulfport synthetic data. A modified CNN architecture was also created (CNN-1D Modified) to maximize performance on the datasets by changing the loss function to MSE, removing max-pooling layers, and adding an additional fully connected layer before the output.
Endmember Fitting Results. To bootstrap both the analysis-by-synthesis and inverse rendering methods, good initial conditions for the dispersion parameters ∧ need to be input to the dispersion model 110. Determining dispersion parameters ∧ has typically required detailed molecular structure analysis or exhaustive parameter search methods. One main advantage of the present system 100 is that gradient descent is used to efficiently find parameter sets for different materials.
In
Spectral Unmixing Results. In Table 1, results on the Feely, Gulfport, and the Gulfport synthetic mixture datasets are shown. For Feely, the analysis-by-synthesis method achieved an MSE of 0.052, with the next closest method (NCM) achieving 0.119. Because the Feely dataset only contains 90 test samples, the machine learning methods were trained on synthetic data, which explains their lower performance due to data mismatch. Thus, the low error of analysis-by-synthesis shows the utility of the dispersion model for modelling endmember variability, particularly in cases with little training data.
For the Gulfport dataset, the task was to predict the material present since the labeled data is for single coarse materials (e.g. road, grass, etc) at 100% abundance per pixel. Here, the deep learning methods of CNNs and Inverse Rendering have the highest performance. This is expected as there exists a large amount of training data to learn from. Note that Inverse Rendering performs the best at 0.272 MSE, demonstrating that the addition of a generative dispersion model to the output of the CNN improves performance over purely learned approaches. Also note that the present analysis-by-synthesis system 100 still has relatively high performance (0.45 MSE) without using any training data at all.
For the Gulfport synthetic mixture dataset, Inverse Rendering achieves the lowest MSE of 0.059, leveraging both physics-based modeling of spectral mixing and learning from the available training data. The BCM and the analysis-by-synthesis methods embodied in the present system 100 both outperform the CNN methods, even though they do not have access to the training data. In fact, BCM even slightly outperforms the analysis-by-synthesis method, which could be because the sources of variation in this data are well-described by statistical distributions.
Speed of Methods. The additional capacity gained by adding statistical and physical models usually comes at a cost in speed. Averaged over 90 mixtures, the convergence time for a single operation was: FCLS, 10 ms; BCM, 1.23 s; NCM, 18 ms; CNN, 33 ms; Inverse Rendering, 39 ms; and analysis-by-synthesis, 10.2 s. Future work could potentially increase the speed of analysis-by-synthesis with parallel processing.
Noise analysis. Prior to spectral unmixing, emissivity is separated from radiance by dividing out the black-body radiation curve at the estimated temperature. In general, a Gaussian noise profile in the radiance space with variance σ²_radiance results in a wavenumber-dependent noise source in the emissivity space with the profile σ²(ω) = σ²_radiance · 1/B(ω,T), where B is the black-body function given by Planck's law. In the noise experiments a black-body radiation curve is used for a 330 K target, which is the approximate temperature at which the Feely dataset samples were held. In
To determine each method's robustness, the noise power was varied and the methods were tested on 30 samples from the Feely dataset. In
TES Data. The Mars TES data was unmixed using the analysis-by-synthesis method embodied by the present system 100 to demonstrate its utility on tasks where zero training data is available. The method produces mineral maps which correctly identify abundances of the mineral hematite at Meridiani Planum in
Certain embodiments are described herein as including one or more modules. Such modules are hardware-implemented, and thus include at least one tangible unit capable of performing certain operations and may be configured or arranged in a certain manner. For example, a hardware-implemented module may comprise dedicated circuitry that is permanently configured (e.g., as a special-purpose processor, such as a field-programmable gate array (FPGA) or an application-specific integrated circuit (ASIC)) to perform certain operations. A hardware-implemented module may also comprise programmable circuitry (e.g., as encompassed within a general-purpose processor or other programmable processor) that is temporarily configured by software or firmware to perform certain operations. In some example embodiments, one or more computer systems (e.g., a standalone system, a client and/or server computer system, or a peer-to-peer computer system) or one or more processors may be configured by software (e.g., an application or application portion) as a hardware-implemented module that operates to perform certain operations as described herein.
Accordingly, the term “hardware-implemented module” encompasses a tangible entity, be that an entity that is physically constructed, permanently configured (e.g., hardwired), or temporarily configured (e.g., programmed) to operate in a certain manner and/or to perform certain operations described herein. Considering embodiments in which hardware-implemented modules are temporarily configured (e.g., programmed), each of the hardware-implemented modules need not be configured or instantiated at any one instance in time. For example, where the hardware-implemented modules comprise a general-purpose processor configured using software, the general-purpose processor may be configured as respective different hardware-implemented modules at different times. Software, in the form of the system applications based on embodiments of systems 100/200 or method 300 or otherwise, may include a hardware-implemented module and may accordingly configure a processor 402, for example, to constitute a particular hardware-implemented module at one instance of time and to constitute a different hardware-implemented module at a different instance of time.
Hardware-implemented modules may provide information to, and/or receive information from, other hardware-implemented modules. Accordingly, the described hardware-implemented modules may be regarded as being communicatively coupled. Where multiple of such hardware-implemented modules exist contemporaneously, communications may be achieved through signal transmission (e.g., over appropriate circuits and buses) that connect the hardware-implemented modules. In embodiments in which multiple hardware-implemented modules are configured or instantiated at different times, communications between such hardware-implemented modules may be achieved, for example, through the storage and retrieval of information in memory structures to which the multiple hardware-implemented modules have access. For example, one hardware-implemented module may perform an operation, and may store the output of that operation in a memory device to which it is communicatively coupled. A further hardware-implemented module may then, at a later time, access the memory device to retrieve and process the stored output. Hardware-implemented modules may also initiate communications with input or output devices.
As illustrated, the computing and networking environment 400 may be a general purpose computing device 400, although it is contemplated that the networking environment 400 may include other computing systems, such as personal computers, server computers, hand-held or laptop devices, tablet devices, multiprocessor systems, microprocessor-based systems, set top boxes, programmable consumer electronic devices, network PCs, minicomputers, mainframe computers, digital signal processors, state machines, logic circuitries, distributed computing environments that include any of the above computing systems or devices, and the like.
Components of the general purpose computing device 400 may include various hardware components, such as a processing unit 402, a main memory 404 (e.g., a system memory), and a system bus 401 that couples various system components of the general purpose computing device 400 to the processing unit 402. The system bus 401 may be any of several types of bus structures including a memory bus or memory controller, a peripheral bus, and a local bus using any of a variety of bus architectures. For example, such architectures may include Industry Standard Architecture (ISA) bus, Micro Channel Architecture (MCA) bus, Enhanced ISA (EISA) bus, Video Electronics Standards Association (VESA) local bus, and Peripheral Component Interconnect (PCI) bus also known as Mezzanine bus.
The general purpose computing device 400 may further include a variety of computer-readable media 407 that includes removable/non-removable media and volatile/nonvolatile media, but excludes transitory propagated signals. Computer-readable media 407 may also include computer storage media and communication media. Computer storage media includes removable/non-removable media and volatile/nonvolatile media implemented in any method or technology for storage of information, such as computer-readable instructions, data structures, program modules or other data, such as RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disks (DVD) or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium that may be used to store the desired information/data and which may be accessed by the general purpose computing device 400. Communication media includes computer-readable instructions, data structures, program modules, or other data in a modulated data signal such as a carrier wave or other transport mechanism and includes any information delivery media. The term “modulated data signal” means a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal. For example, communication media may include wired media such as a wired network or direct-wired connection and wireless media such as acoustic, RF, infrared, and/or other wireless media, or some combination thereof. Computer-readable media may be embodied as a computer program product, such as software stored on computer storage media.
The main memory 404 includes computer storage media in the form of volatile/nonvolatile memory such as read only memory (ROM) and random access memory (RAM). A basic input/output system (BIOS), containing the basic routines that help to transfer information between elements within the general purpose computing device 400 (e.g., during start-up) is typically stored in ROM. RAM typically contains data and/or program modules that are immediately accessible to and/or presently being operated on by processing unit 402. For example, in one embodiment, data storage 406 holds an operating system, application programs, and other program modules and program data.
Data storage 406 may also include other removable/non-removable, volatile/nonvolatile computer storage media. For example, data storage 406 may be: a hard disk drive that reads from or writes to non-removable, nonvolatile magnetic media; a magnetic disk drive that reads from or writes to a removable, nonvolatile magnetic disk; and/or an optical disk drive that reads from or writes to a removable, nonvolatile optical disk such as a CD-ROM or other optical media. Other removable/non-removable, volatile/nonvolatile computer storage media may include magnetic tape cassettes, flash memory cards, digital versatile disks, digital video tape, solid state RAM, solid state ROM, and the like. The drives and their associated computer storage media provide storage of computer-readable instructions, data structures, program modules and other data for the general purpose computing device 400.
A user may enter commands and information through a user interface 440 or other input devices 445 such as a tablet, electronic digitizer, a microphone, keyboard, and/or pointing device, commonly referred to as a mouse, trackball, or touch pad. Other input devices 445 may include a joystick, game pad, satellite dish, scanner, or the like. Additionally, voice inputs, gesture inputs (e.g., via hands or fingers), or other natural user interfaces may also be used with the appropriate input devices, such as a microphone, camera 450, tablet, touch pad, glove, or other sensor. These and other input devices 445 are often connected to the processing unit 402 through a user interface 440 that is coupled to the system bus 401, but may be connected by other interface and bus structures, such as a parallel port, game port or a universal serial bus (USB). A monitor 460 or other type of display device is also connected to the system bus 401 via user interface 440, such as a video interface. The monitor 460 may also be integrated with a touch-screen panel or the like.
The general purpose computing device 400 may operate in a networked or cloud-computing environment using logical connections of a network Interface 403 to one or more remote devices, such as a remote computer. The remote computer may be a personal computer, a server, a router, a network PC, a peer device or other common network node, and typically includes many or all of the elements described above relative to the general purpose computing device 400. The logical connection may include one or more local area networks (LAN) and one or more wide area networks (WAN), but may also include other networks. Such networking environments are commonplace in offices, enterprise-wide computer networks, intranets and the Internet.
When used in a networked or cloud-computing environment, the general purpose computing device 400 may be connected to a public and/or private network through the network interface 403. In such embodiments, a modem or other means for establishing communications over the network is connected to the system bus 401 via the network interface 403 or other appropriate mechanism. A wireless networking component including an interface and antenna may be coupled through a suitable device such as an access point or peer computer to a network. In a networked environment, program modules depicted relative to the general purpose computing device 400, or portions thereof, may be stored in the remote memory storage device.
It should be understood from the foregoing that, while particular embodiments have been illustrated and described, various modifications can be made thereto without departing from the spirit and scope of the invention as will be apparent to those skilled in the art. Such changes and modifications are within the scope and teachings of this invention as defined in the claims appended hereto.
This is a non-provisional application that claims benefit to U.S. Provisional Patent Application Ser. No. 63/041,733 filed 19 Jun. 2020, which is herein incorporated by reference in its entirety.