Various aspects of the disclosure relate to an intraocular pressure sensor and reader.
Glaucoma is a leading cause of blindness, affecting an estimated four million Americans and seventy million individuals globally. Because glaucoma typically affects the elderly, aging demographic trends indicate that this disease will continue to be an ever-increasing socioeconomic burden on society. Elevated intraocular pressure (“IOP”) is a major risk factor for glaucoma, and IOP monitoring is the single most important clinical management tool.
Despite the pervasive use of IOP readings for disease monitoring and the clinically proven importance of the aggressive lowering of IOP, current clinical management is primarily based on only periodic snapshots of IOP in the doctor's office obtained every few months. The inability of patients to easily monitor their own IOPs at different times of the day or during various daily activities hinders the comprehensive understanding of the IOP profile of individual patients and the possibility of custom-tailored IOP control.
The need for better IOP monitoring in clinical ophthalmology and in disease research has been widely appreciated. Existing measurement techniques in clinical use measure IOP indirectly. Current IOP measurements involve a form of contact or noncontact applanation tonometry. However, both modalities have difficulty providing reliable and repeatable readouts of actual IOP values inside the eye. All tonometers produce indirect IOP readings by deforming the ocular globe and correlating this deformation to the pressure within the eye. Their readouts are heavily influenced by corneal curvature, thickness, and mechanical properties, which vary due to co-existing ocular pathologies. For example, patients who have received laser photorefractive keratectomy have thinner corneas in the treated eyes and consistently show lower IOP when measured using tonometry techniques.
Tonometry currently requires specialized equipment operated by an ophthalmologist, optometrist, or skilled technician. Hence, IOP measurements are made typically in a doctor's office about two to four times per year. Since studies show that IOP varies widely throughout the day, quarterly measurements are poor representations of a patient's actual IOP profile.
Example embodiments of IOP sensors and IOP measurement algorithms are disclosed. Certain embodiments of the IOP measurement algorithm comprise a signal demodulation and artificial neural network algorithm to produce reliable results using minimal computational resources. The overall accuracy and speed of the disclosed IOP sensor and IOP measurement algorithm enable their implementation in a home-based IOP measurement system. In this way, patients can take periodic measurements throughout the day and report irregularities to their ophthalmologist. Patients may also monitor their IOP at fine time resolution throughout a physically intensive routine, such as a gym workout, to better understand movements that may trigger surges in IOP levels.
Certain embodiments of the IOP sensor can include: a first wall comprising a flexible membrane; a first chamber formed by the first wall and a second wall; and a first array of photonic components disposed inside of the first chamber. In some embodiments, a raised portion is located within the first chamber on a surface of the second wall, and the first array of photonic components is disposed on a surface of the raised portion.
Further embodiments of the IOP sensor can include: a first wall comprising a flexible membrane; a first chamber formed by the first wall and a second wall; a second chamber sharing the second wall with the first chamber; a plurality of openings located on a surface of the second wall to create a plurality of pass-through openings between the first and second chambers; and a first array of photonic components disposed inside of the first chamber.
Example embodiments of methods for manufacturing an IOP sensor are disclosed. Embodiments of the method can include: providing an array of photonic components on a substrate, wherein the array of photonic components is formed by a lithography process; submitting the array of photonic components on the substrate to a thermal reflow process to transform the array of photonic components to an array of hemispherical photonic components; attaching the array of hemispherical photonic components to a surface of a chamber; and sealing a side of the chamber with a flexible membrane.
Example embodiments of an apparatus for reading intraocular pressure are also disclosed. The apparatus can include: a housing structure; a reflectance probe having a light source, the reflectance probe being attached to the housing structure; an optical spectrometer configured to receive light from an IOP sensor (e.g., via a collimating lens), the optical spectrometer being attached to the housing structure and configured to output a resonance spectrum upon receiving that light; and circuitry configured to determine the intraocular pressure based on one or more variations (e.g., peaks and valleys) in the resonance spectrum.
The foregoing summary, as well as the following detailed description, is better understood when read in conjunction with the accompanying drawings. The accompanying drawings, which are incorporated herein and form part of the specification, illustrate a plurality of embodiments and, together with the description, further serve to explain the principles involved and to enable a person skilled in the relevant art(s) to make and use the disclosed technologies.
As previously mentioned, glaucoma is one of the most prevalent and perplexing diseases today. By 2020, it is projected that 79.6 million people worldwide will have the disease. Of those 79.6 million, 4.5 million will suffer from irreversible bilateral vision loss. Currently, IOP is the most modifiable risk factor for glaucoma, and moderating elevated IOP levels by individualized medication or surgery is the only available therapeutic modality. IOP has a circadian rhythm that fluctuates between 10-21 mmHg throughout the day. This makes it difficult to diagnose and suggest treatments to patients based on sparse IOP measurements taken at a clinic. For example, in one study, the peak IOP measurement from a continuous 24-hour monitoring period was on average 5 mmHg higher than the peak IOP measurement taken at the clinic. This falsely low pressure reading at the clinic may result in improper treatment in 80% of the patients. As such, there is a need for a fast and accurate IOP measurement sensor and method that can be implemented in a home-based IOP measurement system.
Current clinic-based IOP measurement technology such as Dynamic Contour Tonometry (DCT) is not suitable for a home-based implementation because it is both expensive and bulky. DCT is capable of sampling 100 IOP values per second with ±1 mmHg sensitivity. However, it requires very high computer processing power, which is one of the reasons why DCT is expensive and bulky (large computer equipment). Other existing IOP measurement technologies, such as LC sensor implants, micro-fluidic channel sensors, and strain gauges, lack the sensitivity and sampling time resolution of DCT. For example, LC sensor implants have a sensitivity of 2.5 mmHg, and micro-fluidic channel sensors have a sensitivity of 0.5 mmHg but lack the sampling speed and rate. Accordingly, there is a need for a fast and accurate IOP measurement technology that could be implemented in both commercial and home-based IOP measurement systems.
Generally, home-based IOP measurement technologies seek to achieve two performance goals: (1) high sensitivity, to measure IOP on a sub-1 mmHg scale; and (2) high sampling frequency, to detect acute IOP fluctuations by obtaining an IOP profile with high temporal resolution. Provided herein are example embodiments of IOP sensors and IOP measurement methods that are both highly sensitive and faster than conventional techniques. These embodiments can have a sensitivity of ±0.01 mmHg and microsecond-level processing time per signal. This is a considerable improvement in both accuracy and speed over existing IOP measurement technologies (e.g., DCT, LC sensors, micro-fluidic channels, and strain gauges).
In certain embodiments disclosed herein, the sensor for sensing biological pressure (e.g., intraocular pressure) can include an implantable device comprising a first membrane structure, an optional second membrane structure, and a plurality of photonic components adapted to reflect light, wherein the first and second membrane structures are separated by a gap and the first membrane structure is movable with respect to the second membrane structure in response to a change in ambient (surrounding) pressure such that the device has a resonance frequency that shifts as a size of the gap changes. A detection device can be adapted to transmit optical light to the implantable device and detect the resonance frequency of the implantable device based on at least one wavelength of light reflected from the implantable device. The detection device can be adapted to detect the resonance frequency based on a magnitude variation of the at least one wavelength of light reflected from the implantable device. The detection device can be adapted to determine the biological pressure based on the detected resonance frequency of the implantable device.
In some embodiments both the first and second membrane structures are deformable in response to the change in ambient pressure. Both the first and second membrane structures can be separated by one or more mechanical flexures. Also, both the first and second membrane structures can be rigid in some embodiments.
The resonance spectrum can then be processed by an IOP measurement algorithm that first converts the resonance spectrum in real time with a signal demodulation algorithm (SDA). The IOP measurement algorithm further processes the demodulated signal using an artificial neural network (ANN) based algorithm. This combination of SDA and ANN algorithms produces IOP readouts with ±0.01 mmHg sensitivity and microsecond-level processing time per signal. The SDA-ANN algorithm can represent a vast improvement in accuracy and speed over conventional technologies, which enables IOP sensor 100 and the SDA-ANN based algorithm to be implemented in home-based IOP measurement systems. For example, a commercially available DCT system (with a conventional IOP measurement algorithm) is capable of sampling 100 IOP values per second, but requires high computer processing power. The disclosed SDA-ANN based algorithm has a microsecond-level processing time per signal (faster than current DCT systems) without the need for an expensive, high-processing-power computer system.
Referring again to
The array of photonic components 115 may be disposed on a surface of raised portion 110. The photonic components described herein are components of the sensor and broadly serve to facilitate the reflection of light (photons) by the sensor. The photonic components, as will be described herein, can have various sizes and shapes, and can be arranged in arrays or patterns of various designs.
Each photonic component may be cylindrical in shape. In some embodiments, each photonic component has a hemispherical shape, which has a wide angle of reflectance, thereby enabling light to enter at a wide range of incidence angles. In other words, the light source on the external reader does not have to be shone directly (at a zero angle of incidence) at IOP sensor 100 (which can be embedded in the eye of a patient). One benefit of the hemispherical shape across the array is a smoother reflectivity distribution of the optical resonance spectrum, since sharper transition edges (between adjacent photonic components) lead to a deeper dip in the reflectivity distribution. This also contributes to a wider-band resonance spectrum and less sensitivity to the angle of incidence of the light source. In some cases, the photonic components can be referred to as nanodots, but this term does not require the components to be sized on a “nanoscale” nor to have a round shape absent explicit recitation of such in the claims.
In some embodiments, cavity 105 may include an optical cavity 210, which can be formed by a distance 205 between a horizontal tangential plane of nanodot array 115 and the bottom surface of flexible membrane 120. Distance 205 changes as flexible membrane 120 deforms toward or away from nanodot array 115 due to changes in the intraocular pressure. Distance 205 determines the distribution of the reflectance spectra. In turn, one or more variations (such as a peak or valley) of the resonance spectrum can be used to determine the intraocular pressure. In some embodiments, distance 205 has an initial value in the range of 5-10 μm. In some embodiments, distance 205 has an initial value of approximately 7.3 μm.
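By way of a non-limiting illustration, the following Python sketch applies an idealized two-beam interference model to the optical cavity described above: wavelengths whose interference order fits the round-trip optical path 2·n·d fall within the NIR band used by the reader, and their positions shift as distance 205 changes. The refractive index, band limits, and gap values below are placeholders rather than measured device parameters, and whether a given order appears as a peak or a valley depends on the phase shifts at each interface.

```python
# Illustrative sketch only: an idealized two-beam interference model of the
# optical cavity formed by the gap between the nanodot array and the membrane.
import numpy as np

def cavity_extrema_wavelengths_nm(gap_um, n_medium=1.0, band_nm=(700.0, 1150.0)):
    """Interference-order wavelengths (nm) that fall inside the NIR band."""
    two_nd_nm = 2.0 * n_medium * gap_um * 1e3          # round-trip optical path in nm
    m_lo = int(np.ceil(two_nd_nm / band_nm[1]))        # lowest order inside the band
    m_hi = int(np.floor(two_nd_nm / band_nm[0]))       # highest order inside the band
    orders = np.arange(m_lo, m_hi + 1)
    return two_nd_nm / orders

# Example: an initial ~7.3 um gap versus a slightly compressed gap (hypothetical 50 nm deflection)
for gap in (7.30, 7.25):
    print(f"gap = {gap:.2f} um -> extrema near {np.round(cavity_extrema_wavelengths_nm(gap), 1)} nm")
```

As the printed values suggest, even a sub-100 nm change in the gap measurably shifts every extremum in the band, which is the property the reader exploits to infer pressure.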
As shown in
Because flexible membrane 120 can be very thin and potentially fragile, protective layer 125 may be disposed along the outer perimeter of membrane 120 without covering any portion of chamber 105. Protective layer 125 can be designed to give surgeons a surface by which to grab IOP sensor 100. This allows for the installation of IOP sensor 100 into a patient's eye without touching flexible membrane 120. In some embodiments, protective layer 125 may be made of silicone.
In some embodiments, the first array of nanodots 520 can be disposed on a surface of a wall 550, which forms one of the two walls of chamber 505. Wall 550 can be the middle wall, which forms part of both chamber 505 and chamber 510. Flexible membrane 540 forms the second wall of chamber 505. As shown, the first array of nanodots 520 may be disposed in the center of the surface of wall 550. Although not shown, the first array of nanodots may be disposed on a raised surface, which raises the first nanodot array 520 toward flexible membrane 540.
The second array of nanodots 525 may be disposed at a bottom surface (e.g., the surface closer to chamber 505) of flexible membrane 540. The space between the first and second nanodot arrays forms an optical cavity. The optical cavity may have a thickness 560 of approximately 7.4 microns. It should be noted that other suitable thicknesses may be employed.
As shown in
Next, a layer of photoresist 1120 can be coated onto layer 1115. Photoresist layer 1120 is then exposed and developed to form an array of nanodots. In general, there are two types of photoresist: negative-acting and positive-acting resists. A negative photoresist goes through a photo-hardening process when exposed to ultraviolet (UV) light. A positive photoresist goes through a photo-softening process during UV exposure, leaving the same pattern as the mask on the resist film after development. It should be noted that the nanodot pattern on substrate 1110 may be produced using either a positive or a negative resist. In some embodiments, a positive resist can be used to ultimately create the nanodot array pattern on photoresist layer 1120. Once photoresist layer 1120 is patterned and exposed to UV light, it goes through a chemical bath that removes the portions exposed to the UV light. This leaves behind a pattern of holes where the nanodots will be formed.
If a negative resist is used, then a negative pattern of the nanodots will be patterned onto layer 1115. Patterning may be done using photolithography, which employs a mask. Alternatively, the nanodot array pattern can be written directly onto layer 1115 using electron beam lithography.
As depicted in
As depicted in
To obtain IOP measurements, an illuminant probe can be configured to shine near-infrared (NIR) light at IOP sensor 100. The NIR light may have a wavelength between 700-1150 nm. This wavelength range can be desirable because it is invisible to the human eye and has excellent tissue penetration. Because IOP sensor 100 may incorporate a hemispherical nanodot array, the illuminant probe does not have to shine the NIR light directly at sensor 100. The hemispherical shape of the nanodot array enables the array to receive light at a wider range of incidence angles while still reflecting light back at a near-perpendicular angle with respect to the horizontal plane of the nanodot array.
The reflected light (also referred to as a reflection spectrum) is then captured by an optical spectrometer that is running the SDA-ANN algorithm. In some embodiments, the SDA-ANN enabled optical spectrometer receives the reflected spectrum and produces a resonance waveform 1210, as shown in the accompanying figures.
At stage 1310, denoising and low-pass filtering can be performed on the signal. This preprocessing step can be performed to identify and compensate for misalignments between the illuminant probe and IOP sensor 100. The first step of stage 1310 can be to categorize the misalignment between the illuminant probe and the IOP sensor using a valley detection module. By appropriately categorizing the misalignment, the valley detection module can instruct the user to make the appropriate correction in real time in order to achieve a better or optimum signal. Generally, there are two orientation conditions—transverse and longitudinal—that should be met in order to obtain proper resonance spectra. First, the light probe should be fixated at a longitudinal coordinate within a valid focal range in order to prevent over- and under-reflection. Second, the convergence point of the light should be in close proximity to the sensor diaphragm center in order to mitigate peripheral reflections from the cavity contour. When these positioning conditions are not satisfied, the reflection spectrum may become saturated, which will overwhelm the optical sensors in the spectrometer. Alternatively, incorrect probe positioning may lead to no signal, or cause the reflection spectra to have a single-peak waveform resulting from the black-body radiation profile of the light source.
In some embodiments, when a saturated signal is detected, the valley detection module may generate an instruction directing the user to shorten the focal length of the illumination light source. In other words, a saturated signal indicates that the focal length of the illumination light source is too long, such that the focal point extends beyond the optical cavity of IOP sensor 100. Conversely, when no signal is detected, the valley detection module may generate an instruction directing the user to lengthen the focal length, since the absence of a signal indicates that the focal length of the light source is too short. Additionally, when a single-peak spectrum occurs, an instruction can be generated directing the user to translate the probe in the horizontal direction with respect to the eye or IOP sensor 100. In some embodiments, the instruction provided to the user may be audio, textual, graphical, or a combination thereof.
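A minimal sketch of this misalignment-to-guidance mapping is shown below; the condition labels and message wording are hypothetical placeholders, and a real reader could deliver the guidance as audio, text, or graphics.

```python
# Hypothetical condition labels and wording; illustrates how a detected
# misalignment category could be mapped to a user-facing correction prompt.
GUIDANCE = {
    "saturated":   "Adjust the probe focal length: the focal point extends beyond the sensor cavity.",
    "no_signal":   "Adjust the probe focal length: the focal point falls short of the sensor cavity.",
    "single_peak": "Translate the probe toward the sensor diaphragm center.",
}

def guidance_for(condition: str) -> str:
    return GUIDANCE.get(condition, "Alignment OK: proceed with the measurement.")

print(guidance_for("saturated"))
```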
In some embodiments, the valley detection module can be configured to recognize misalignment waveforms such as saturation, a single-peak spectrum, or no signal. For example, minima detection may be converted into a peak detection problem by inverting the spectrum. In some embodiments, a minimum peak prominence threshold may be set to filter out the single-peak and low-prominence (no-signal) waveforms, both of which indicate a misalignment. Further, the gradient of the intensity spectrum may be swept to check for extended regions having zero slope in order to filter out misalignment caused by saturation. In some embodiments, the peak detection function (of the valley detection module) may output the locations of the valleys that meet the threshold requirements. If no valleys are detected, the optical detector and IOP sensor are misaligned; in this case, the main program terminates with an IOP value of zero and the valley detection module informs the user of a misalignment in the setup. If more than one valley is detected, the algorithm proceeds to input the extracted valleys to the trained ANN.
In some embodiments, the illumination probe may be coupled to one or more servomotors or linear actuators, and IOP process 1300 may be configured to send instructions to a controller to automatically adjust the illumination probe based on the SDA alignment analysis. In this embodiment, the illumination probe and optical spectrometer may be mounted on a stationary reader such as a portable table-top unit.
As mentioned, part of the misalignment detection algorithm includes a spectral feature recognition algorithm that detects valleys. At stage 1320, at a high level, if the valleys are found to meet the detection thresholds, the SDA may conclude that the setup is well aligned and proceed to the ANN execution step.
In some embodiments, the first step of the valley detection module can be to smooth the raw intensity silhouette (v1) in order to prevent white noise from impairing the accuracy of valley detection. Next, a finite impulse response filter can be employed to compute the moving average of 20 consecutive data points to attenuate the high-frequency noise components. The additive inverse of the intensity spectrum can then be input into a peak detection module in order to convert the problem of minima detection into maxima detection, thereby allowing the use of powerful peak detection functions included in software packages such as MATLAB.
In some embodiments, three thresholds were integrated into the peak detection in order to track the desired valley locations. First, a minimum peak prominence (p_min) threshold was set in order to filter out misalignments (ii) and (iii) (the single-peak and no-signal conditions), both of which display a single low-prominence peak. Second, a minimum peak distance (d_min) threshold was applied to prevent remnant noise in the vicinity of one peak from being detected as multiple peaks. Lastly, the gradient of the intensity spectrum was swept to check for extended regions having zero gradient in order to filter out misalignment (i) (saturation).
In some embodiments, the signal demodulation algorithm at stage 1320 includes a peak detection function configured to output the location and prominence of the valleys that satisfy the requirements set by the thresholds (e.g., the minimum peak prominence threshold). If no valleys are detected, the probe and IOP sensor are misaligned; in this case, the main program module reports the detected misalignment with an IOP value of zero. If more than one valley is detected, the algorithm proceeds to input the extracted valleys to the ANN stage (stages 1330 and 1340), where the best-fitting theoretical spectrum is determined for each measured spectrum. Once this is done, an air-gap-to-sensor-pressure relationship may be accurately obtained based on the best-fit spectra-to-pressure model.
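The following Python sketch illustrates this signal-demodulation stage under the steps described above (20-point moving-average smoothing, spectrum inversion, and prominence/distance/flat-region checks). The threshold values, function names, and the return convention are assumptions for illustration rather than the settings used in the disclosed system.

```python
# Sketch of the SDA valley-detection stage; thresholds are placeholders.
import numpy as np
from scipy.signal import find_peaks

def extract_valleys(wavelength_nm, intensity, p_min=0.05, d_min_nm=20.0, flat_run=50):
    # 20-point moving-average FIR filter attenuates high-frequency noise
    smooth = np.convolve(intensity, np.ones(20) / 20.0, mode="same")

    # Misalignment (i): saturation appears as an extended region of ~zero gradient
    flat = (np.abs(np.gradient(smooth)) < 1e-6).astype(int)
    if np.convolve(flat, np.ones(flat_run, dtype=int), mode="same").max() >= flat_run:
        return None, "saturated"

    # Invert the spectrum so that minima detection becomes maxima detection
    step_nm = float(np.mean(np.diff(wavelength_nm)))
    idx, _ = find_peaks(-smooth,
                        prominence=p_min,                         # rejects single low-prominence peaks (ii)/(iii)
                        distance=max(1, int(round(d_min_nm / step_nm))))
    if len(idx) < 2:
        return None, "single_peak_or_no_signal"
    return wavelength_nm[idx], "aligned"                          # valley locations passed on to the ANN
```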
In some embodiments, the ANN algorithm includes a hidden layer with 10 neurons activated by a hyperbolic tangent non-linearity and one linear output layer. The mean squared error was used as the loss function for training. A deeper architecture was not necessary and could potentially lead to overfitting, since one hidden layer was sufficient to achieve excellent out-of-sample performance. In some embodiments, the optimum number of neurons was found by cross-validation, in which the dependence of the estimated test accuracy on the number of hidden units was evaluated. The results are shown in the accompanying figures.
In some embodiments, a training procedure may be implemented to train the ANN algorithm prior to using the ANN algorithm to measure a real-time IOP from a patient's eye.
In some embodiments, a feature vector can be used to train the ANN algorithm. The feature vector can be represented as an array xinput = [λ1, λ2, λ3, Δλ12, Δλ23], which includes the wavelengths of a fixed number of valleys and the wavelength difference between each adjacent pair. These valley and valley-spacing features are then weighted based on the corresponding pressures in the training set, and the trained network is used to predict an output intraocular pressure from the features of any arbitrary optical spectrum.
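The sketch below illustrates the feature construction and the single-hidden-layer network described above (10 tanh neurons, a linear output, and a squared-error loss), using scikit-learn's MLPRegressor as a stand-in for the disclosed implementation. The valley-versus-pressure data are synthetic, fabricated purely so the example runs; they are not device calibration data.

```python
# Illustrative only: synthetic calibration data and a small tanh network.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def make_feature_vector(valley_nm):
    """x_input = [l1, l2, l3, dl12, dl23] built from three valley wavelengths."""
    l1, l2, l3 = np.sort(np.asarray(valley_nm, dtype=float))[:3]
    return np.array([l1, l2, l3, l2 - l1, l3 - l2])

rng = np.random.default_rng(0)
pressures = rng.uniform(0.0, 30.0, size=400)                      # reference IOP values, mmHg
valleys = np.stack([810 + 0.8 * pressures,                        # fabricated linear valley shifts
                    862 + 0.9 * pressures,
                    918 + 1.0 * pressures], axis=1)
valleys += rng.normal(0.0, 0.05, valleys.shape)                   # measurement noise

X = np.array([make_feature_vector(v) for v in valleys])
y = pressures

ann = make_pipeline(StandardScaler(),
                    MLPRegressor(hidden_layer_sizes=(10,),        # 10 tanh neurons, linear output
                                 activation="tanh", solver="adam",
                                 max_iter=5000, random_state=0))  # squared-error loss by default
ann.fit(X, y)
print(ann.predict(make_feature_vector(valleys[0]).reshape(1, -1)))  # roughly approximates pressures[0]
```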
In training the ANN algorithm, the network can be trained with and without the valley-spacing parameters, and a considerable improvement in calibration accuracy can be achieved when the spacing is included.
In some embodiments, the ANN algorithm has a unit training set of 40,000 points recorded during a linear rise in pressure. Multiple rounds of unit training sets were generated, combined into one training set, and fed into the ANN learning algorithm. A training session took, on average, 2.5 seconds and 550 epochs. The loss function with respect to the training epochs is shown in the accompanying figures.
In some embodiments, an optomechanical model can be used to determine the intraocular pressure from IOP sensor 100. The optomechanical model consists of two steps. First, a model was developed to determine the air gap between the sensor's silicon nitride membrane and its silicon base for a given IOP (see line 1510 in the accompanying figures). Second, the resulting air gap can be related to the corresponding theoretical reflectance spectrum, which is then matched against the measured spectrum.
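By way of illustration only, the following sketch chains the two steps using a standard small-deflection, clamped circular-plate approximation for the membrane and the same interference relation sketched earlier for the gap. The membrane dimensions and material constants below are hypothetical placeholders, not the sensor's actual parameters, and the disclosed model may instead rely on a finite-element or large-deflection analysis.

```python
# Hypothetical geometry/material values; illustrates pressure -> gap -> spectrum.
import numpy as np

E, NU = 250e9, 0.23            # assumed silicon nitride modulus (Pa) and Poisson ratio
T, A = 2e-6, 150e-6            # assumed membrane thickness and radius (m)
GAP0 = 7.3e-6                  # initial air gap (m), per the text
D = E * T**3 / (12 * (1 - NU**2))                  # flexural rigidity

def gap_vs_pressure(iop_mmhg):
    """Step 1: center deflection of a clamped circular plate, w0 = P*a^4 / (64*D)."""
    p_pa = iop_mmhg * 133.322
    return GAP0 - p_pa * A**4 / (64 * D)

def valley_wavelengths_nm(gap_m, band=(700e-9, 1150e-9)):
    """Step 2: interference-order wavelengths of the gap within the NIR band."""
    two_d = 2.0 * gap_m
    m = np.arange(int(np.ceil(two_d / band[1])), int(np.floor(two_d / band[0])) + 1)
    return two_d / m * 1e9

for iop in (0.0, 15.0, 30.0):
    print(f"{iop:4.1f} mmHg -> valleys near {np.round(valley_wavelengths_nm(gap_vs_pressure(iop)), 1)} nm")
```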
In some embodiments, the ANN algorithm was trained to perform within the clinically relevant 0 to 30 mmHg range, which corresponds to the pressure range of interest. To test the neural network calibration accuracy over the full detection range, the pressure response of the IOP sensor can be compared with the readings from a reference pressure sensor for three pressure cycles, each of which runs from 1 mmHg to 30 mmHg (see the accompanying figures).
IOP process 1300 has a high temporal resolution, which enables it to track transient IOP spikes. To appreciate the sampling frequency of the ANN-based SDA of IOP process 1300, the response of the ocular implant to a high-frequency pressure fluctuation may be analyzed.
The accuracy of the frequency detection can be demonstrated by examining the FFT spectrum while varying the applied frequency in 0.05 Hz increments from 10 Hz to 10.95 Hz. The resulting output frequency versus target frequency is illustrated in the accompanying figures.
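The following sketch illustrates such a frequency-detection check: take the FFT of the time series of IOP outputs and read off the dominant fluctuation frequency. The sampling rate, record length, and the synthetic 10.35 Hz waveform are placeholders, chosen so that the 0.05 Hz steps described above are resolvable (frequency resolution equals one over the record length).

```python
# Synthetic IOP time series; illustrates FFT-based fluctuation-frequency readout.
import numpy as np

FS = 500.0                                        # assumed IOP sampling rate (Hz)
T_REC = 20.0                                      # 20 s record -> 0.05 Hz resolution
t = np.arange(0.0, T_REC, 1.0 / FS)
iop = 15.0 + 2.0 * np.sin(2 * np.pi * 10.35 * t)  # ~4 mmHg peak-to-peak fluctuation at 10.35 Hz

spectrum = np.abs(np.fft.rfft(iop - iop.mean()))  # remove the DC component before the FFT
freqs = np.fft.rfftfreq(len(iop), d=1.0 / FS)
print(f"dominant fluctuation frequency: {freqs[np.argmax(spectrum)]:.2f} Hz")   # ~10.35 Hz
```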
The frequency detection experiments described above were conducted with medium-amplitude (≈4 mmHg) fluctuations in pressure. To verify the aptitude of the system in detecting acute high-amplitude fluctuations of pressure, a 20 mmHg pressure spike was induced using a syringe pump, and the results from the reference pressure sensor were compared with the ANN output. The resulting erratic pulsatile waveform is plotted in the accompanying figures.
It should be noted that the algorithms or instructions of IOP measurement process 1300, which include the signal demodulation, valley/peak detection, spectral feature recognition and matching, and artificial neural network algorithms previously described, may be stored on a memory that is readable by a computer. A computer may be a processor or an application-specific integrated circuit (ASIC). When the algorithms or instructions are executed by the computer, the instructions will cause the computer to carry out the functionalities of IOP measurement process 1300 as described above.
These results suggest that any of IOP sensors 100, 300, 500, 700, 800, 900, and 1000, when processed with the ANN based SDA algorithms, can accurately characterize the intraocular pressure fluctuations ranging from small IOP pulses synchronized with the cardiovascular system to sporadic high amplitude spikes resulting from ocular hypertension or surgical stimulation.
In some embodiments, a broadband light source in the visible or the invisible spectrum can be used. Probe-sensor 1710 can be configured to illuminate the implanted sensor using broadband light and then detect the reflection from the sensor. For IOP readout, the reflected light may be relayed to a commercial mini spectrometer embedded in portion 1715 of apparatus 1700. In some embodiments, the wearable apparatus also includes a display 1730 for displaying the IOP readout and/or the resonance spectra of the reflected light. In some embodiments, the wearable apparatus includes an ASIC (see the accompanying figures).
In the example of
The processing circuit 1804 can be responsible for managing the bus 1802 and for general processing, including the execution of the software module/engine stored on the machine-readable medium 1806. In some embodiments, SDA-ANN module 1850 includes algorithms as described in process 1300 that, when executed by processing circuit 1804, cause processing system 1814 to perform the various functions described herein for any particular apparatus. Machine-readable medium 1806 may also be used for storing data that is manipulated by processing circuit 1804 when executing the software module/engine.
One or more processing circuits 1804 in the processing system may execute a software module/engine or software module/engine components. The term software module/engine shall be construed broadly to mean instructions, instruction sets, code, code segments, program code, programs, subprograms, software modules/engines, applications, software applications, software packages, routines, subroutines, objects, executables, threads of execution, procedures, functions, etc., whether referred to as software, firmware, middleware, microcode, hardware description language, or otherwise. One or more processing circuits (or processing circuitry) may perform the tasks. A code segment may represent a procedure, a function, a subprogram, a program, a routine, a subroutine, an engine, a module, a software package, a class, or any combination of instructions, data structures, or program statements. A code segment may be coupled to another code segment or a hardware circuit by passing and/or receiving information, data, arguments, parameters, or memory or storage content. Information, arguments, parameters, data, etc. may be passed, forwarded, or transmitted via any suitable means including memory sharing, message passing, token passing, network transmission, etc.
The software module/engine may reside on machine-readable medium 1806. The machine-readable medium 1806 may be a non-transitory machine-readable medium. A non-transitory processing circuit-readable, machine-readable or computer-readable medium includes, by way of example, a magnetic storage device (e.g., hard disk, solid-state drive), an optical disk (e.g., a compact disc (CD) or a digital versatile disc (DVD)), a smart card, a flash memory device (e.g., a card, a stick, or a key drive), RAM, ROM, a programmable ROM (PROM), an erasable PROM (EPROM), an electrically erasable PROM (EEPROM), a register, a removable disk, a hard disk, a CD-ROM and any other suitable medium for storing software module/engine and/or instructions that may be accessed and read by a machine or computer. The terms “machine-readable medium”, “computer-readable medium”, “processing circuit-readable medium” and/or “processor-readable medium” may include, but are not limited to, non-transitory media such as portable or fixed storage devices, optical storage devices, and various other media capable of storing, containing or carrying instruction(s) and/or data. Thus, the various methods described herein may be fully or partially implemented by instructions and/or data that may be stored in a “machine-readable medium,” “computer-readable medium,” “processing circuit-readable medium” and/or “processor-readable medium” and executed by one or more processing circuits, machines and/or devices. The machine-readable medium may also include, by way of example, a carrier wave, a transmission line, and any other suitable medium for transmitting software module/engine and/or instructions that may be accessed and read by a computer.
The machine-readable medium 1806 may reside in the processing system 1814, external to the processing system 1814, or distributed across multiple entities including the processing system 1814. The machine-readable medium 1806 may be embodied in a computer program product. By way of example, a computer program product may include a machine-readable medium in packaging materials. Those skilled in the art will recognize how best to implement the described functionality presented throughout this disclosure depending on the particular application and the overall design constraints imposed on the overall system.
One or more of the components, steps, features, and/or functions illustrated in the figures may be rearranged and/or combined into a single component, block, feature or function or embodied in several components, steps, or functions. Additional elements, components, steps, and/or functions may also be added without departing from the disclosure. The apparatus, devices, and/or components illustrated in the Figures may be configured to perform one or more of the methods, features, or steps described in the Figures. The algorithms described herein may also be efficiently implemented in software module/engine and/or embedded in hardware.
Note that the aspects of the present disclosure may be described herein as a process that is depicted as a flowchart, a flow diagram, a structure diagram, or a block diagram. Although a flowchart may describe the operations as a sequential process, many of the operations can be performed in parallel or concurrently. In addition, the order of the operations may be re-arranged. A process is terminated when its operations are completed. A process may correspond to a method, a function, a procedure, a subroutine, a subprogram, etc. When a process corresponds to a function, its termination corresponds to a return of the function to the calling function or the main function.
While certain example embodiments have been described and shown in the accompanying drawings, it is to be understood that such embodiments are merely illustrative of and not restrictive on the broad invention, and that this invention should not be limited to the specific constructions and arrangements shown and described, since various other modifications are possible. Those skilled in the art will appreciate that various adaptations and modifications of the just described preferred embodiment can be configured without departing from the scope and spirit of the invention. Therefore, it is to be understood that, within the scope of the appended claims, the invention may be practiced other than as specifically described herein.
This filing claims the benefit of and priority to U.S. Provisional Application Ser. No. 62/261,176, filed Nov. 30, 2015, U.S. Provisional Application Ser. No. 62/274,470, filed Jan. 4, 2016, and U.S. Provisional Application Ser. No. 62/287,329, filed Jan. 26, 2016, all of which are incorporated herein by reference in their entireties for all purposes.
Number | Date | Country
---|---|---
62/287,329 | Jan. 26, 2016 | US
62/274,470 | Jan. 4, 2016 | US
62/261,176 | Nov. 30, 2015 | US