The present disclosure generally relates to tomography, and more specifically to electrical impedance tomography.
Two-dimensional resistance tomography utilizes a resistive elastomer sensing membrane to produce a change in resistance when contact pressure is applied. Resistance change is measured through periphery contact electrodes to generate a tomographic image of low resolution. To increase the tomographic image resolution, a large number of periphery contact electrodes are required to generate the large amount of data needed to feed computation-intensive mesh algorithms. The amount of data and computational complexity does not assure that the measurements will converge to a solution, which wastes computing resources. In short, traditional algorithms either waste resources or suffer the low resolution that comes with failing to provide the required amount of data, due to reliance on an ill-defined mesh formulation in the algorithm, on poorly placed contact electrodes, and on non-optimal electrode measurement pairs.
The disclosure is better understood with reference to the following drawings and description. The elements in the figures are not necessarily to scale, emphasis instead being placed upon illustrating the principles of the disclosure. Moreover, in the figures, like-referenced numerals may designate corresponding parts throughout the different views.
High Resolution Two-Dimensional Resistance Tomography
The measured voltage VCD may be converted to digital format (e.g., digital data) through an analog to digital converter ADC 312. A respective tetra-polar resistance (rABCD)i corresponding to a respective voltage and current ratio (rABCD)i=VCD/IAB may be stored in a memory to be processed by a microcontroller MCU 314. Different electrode contact pairs such as EF and GH may also be simultaneously monitored as current leads and voltage probes until some or all N possible combinations of electrode pairs are tested to measure the remaining tetra-polar resistances (rABCD)i to (rABCD)N.
Accordingly, a 2-D tomographic image 304 over a surface 302 may be mapped by a computer implemented method by executing the following steps or operations. The first step defines a surface area 302 of a resistive sensing membrane 301 having Q periphery contact electrodes “A to H” that are attached along the periphery of the surface area of the resistive sensing membrane 301. In this step, Q is an integer (eight are illustrated in
The method further includes the step of mapping a 2-D resistance tomographic image 304 over the defined surface area 302 of the resistive sensing membrane 301. The mapping renders a plurality of local area resistance values (rABCD)i to (rABCD)N that reflect the applied contact pressure “F” to the surface area of the resistive sensing membrane.
The 2-D resistance tomographic image mapping further includes measuring the plurality of local area resistances (rABCD)i to (rABCD)N sequentially, over each and every N maximum combinations of two periphery contact electrode pairs from among the Q periphery contact electrodes “A to H”. The result is a respective tetra-polar resistance (rABCD)i, wherein i=1 to N, and
wherein each respective tetra-polar resistance (rABCD)i corresponds to a respective voltage and current ratio (rABCD)i=VCD/IAB, such that a respective voltage VCD is established across a first periphery contact electrode pair CD when a respective current IAB is simultaneously passed across a second periphery contact electrode pair AB. The first periphery contact electrode pair CD is different from the second periphery contact electrode pair AB, wherein the respective tetra-polar resistance (rABCD)i reflects a local area resistance variation in a resistivity map ρ(r) of the 2-D resistance tomographic image. The resistivity map ρ(r) is related to the orthogonal basis polynomial functions ϕi(r) by ρ(r)=Σi ai ϕi(r), and the resistivity map ρ(r) is formed by superimposing the orthogonal basis polynomial functions ϕi(r). The orthogonal basis polynomial functions ϕi(r) have a resolution that increases with a degree of freedom set at an upper limit that is the same as the maximum combinations of N measurements. Here a=(a1, a2, . . . , ai, . . . ) is the ordered vector of coefficients. The 2-D resistance tomographic image is displayed through the resistivity map ρ(r) on the defined surface 302.
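The measurement sweep described above can be sketched in Python. This is an illustrative sketch: the `measure` driver standing in for the ADC/MCU chain, the electrode labels, and the dictionary return format are assumptions, not part of the disclosure.

```python
from itertools import combinations

def tetra_polar_sweep(electrodes, measure):
    """Sweep all distinct (current-pair, voltage-pair) combinations.

    `measure(current_pair, voltage_pair)` is a hypothetical driver that
    sources a current I across the first pair, reads the voltage V across
    the second, and returns the tetra-polar resistance r = V / I.
    """
    readings = {}
    for ab, cd in combinations(combinations(electrodes, 2), 2):
        if set(ab) & set(cd):
            continue  # current and voltage pairs must use different contacts
        readings[(ab, cd)] = measure(ab, cd)
    return readings

# Toy stand-in for the measurement chain over a uniform membrane.
demo = tetra_polar_sweep("ABCDEFGH", lambda ab, cd: 1.0)
```

For Q=8 electrodes this enumerates 210 distinct (current-pair, voltage-pair) configurations; the N independent measurements described above are a subset of these.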
The computer implemented algorithm is modified to map a tomographic image 360 across a volume 350 beneath a surface 362. The method includes defining a resistive volume 350 having Q surface contact electrodes A to J attached on the defined surface area 362 of the resistive volume 350. Q is an integer, such as nine in this example (e.g., preferably greater than or equal to five), where a plurality of local volume resistances (rABCD)i to (rABCD)N are defined. The local volume resistances vary with depth and material compositions (e.g., the tissue types and densities) beneath the defined surface area 362 of the resistive volume 350. The variations cause a three-dimensional (3-D) resistance variation. The method further includes mapping a 3-D resistance tomographic image over the defined resistive volume 350 according to the plurality of local volume resistances (rABCD)i to (rABCD)N beneath the defined surface area 362 of the resistive volume 350.
The 3-D resistance tomographic image mapping further includes measuring the plurality of local volume resistances (rABCD)i to (rABCD)N sequentially, over each and every N maximum combinations of two surface contact electrode pairs (e.g., AB, CD, etc.) from among the Q surface contact electrodes A to J. The result is a respective tetra-polar resistance (rABCD)i, where i=1 to N, and
Each of the respective tetra-polar resistances (rABCD)i corresponds to a respective voltage and current ratio (rABCD)i=VCD/IAB. A respective voltage VCD is established across a first surface contact electrode pair CD when a respective current IAB is simultaneously passed between a second surface contact electrode pair AB. The first surface contact electrode pair CD is different from the second surface contact electrode pair AB. The respective tetra-polar resistance (rABCD)i reflects a local volume resistance variation in a resistivity map ρ(r) of the 3-D resistance tomographic image. The resistivity map ρ(r) is related to orthogonal basis polynomial functions ϕi(r) through the expression ρ(r)=Σi ai ϕi(r). The resistivity map ρ(r) is formed by superimposing the orthogonal basis polynomial functions ϕi(r). The map has a resolution that increases with a degree of freedom set at an upper limit that is the same as the maximum number of combinations N. The variable a=(a1, a2, . . . , ai, . . . ) is the ordered vector of coefficients. The detection displays the 3-D resistance tomographic image through the resistivity map ρ(r) beneath the defined area 362.
The disclosed method improves tomographic resistance image resolution by adopting an orthogonal basis with a maximum number of elements N, which renders a maximum resolution resistivity map ρ(r). The number of elements N is determined by the number of electrodes Q. The detection defines the orthogonal basis according to any known constraints in a problem, thereby enhancing the resolution wherever it is needed. The detection positions the electrodes such that they are sensitive to these basis functions. The selection of current I and voltage V contact electrode pairs maximizes the signal-to-noise ratio output.
Some standard methods for electrical impedance tomography solve the inverse mapping problem by defining thousands of mesh points to represent a resistance map that must be consistent with a measurement set orders of magnitude smaller in size. As such, these finite-element methods present an ill-defined problem in which the number of variables to be solved greatly exceeds the number of equations available to constrain them. Under these conditions, a large amount of computational power is wasted on calculating an unnecessarily large number of mesh points, and the resulting solution is not unique, depending on the choice of mesh or other minor boundary conditions. Subsequently, a regularization procedure must be performed to include a cost-function in the solution that artificially induces smoothness in the final result.
More specifically, the disclosed 2-D and 3-D methods devise an alternate strategy for the inverse problem in electrical impedance tomography, which improves detection resolution and reduces computational time. The 2-D and 3-D methods take the following approaches:
(1) Set the number of orthogonal basis functions for the resistivity map N equal to the maximum number of independent resistance measurements, thereby guaranteeing a maximum resolution. If Q is the number of contacts, then the number of independent measurements N is:
N=Q(Q−3)/2 (1)
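As a minimal numeric check of this counting rule (the function name is illustrative; the formula N = Q(Q−3)/2 is consistent with the independent-measurement count DI = C(C−3)/2 discussed later for the Sheffield protocol):

```python
def max_independent_measurements(q):
    """Maximum number of independent tetra-polar measurements for q
    periphery contact electrodes: N = q(q - 3) / 2."""
    return q * (q - 3) // 2

# Eight electrodes (A..H) give 20 independent measurements.
print(max_independent_measurements(8))   # -> 20
print(max_independent_measurements(16))  # -> 104
```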
The number of basis functions may be restricted to the number of degrees of freedom, which makes the solution unique rather than ill-defined.
(2) Execute a set of orthogonal basis functions ϕi(r) to describe the resistivity map ρ(r). Traditional tomographic methods may define a high-resolution mesh with thousands of points to describe the resistivity map. The mesh points are not independent of each other; as such, they must be artificially correlated by adding an additional cost-function term in a regularization procedure. However, the disclosed approach executes a set of orthogonal basis functions ϕi(r) to describe the resistivity map ρ(r). The resistivity map ρ(r) may be described as an ordered vector of coefficients a=(a1, a2, . . . , ai, . . . ) that may be expressed by Eq. 2.
ρ(r)=Σi aiϕi(r) (2)
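Eq. 2 is a plain weighted superposition, sketched below over a one-dimensional cross-section. The function names and the toy constant/linear basis are illustrative assumptions:

```python
import numpy as np

def resistivity_map(coeffs, basis_fns, points):
    """Evaluate rho(r) = sum_i a_i * phi_i(r) (Eq. 2) on an array of points."""
    return sum(a * phi(points) for a, phi in zip(coeffs, basis_fns))

# Illustrative 1-D cross-section with two toy basis functions.
x = np.linspace(-1.0, 1.0, 5)
basis = [lambda r: np.ones_like(r), lambda r: r]  # constant + linear terms
rho = resistivity_map([2.0, 0.5], basis, x)       # rho(x) = 2 + 0.5 x
```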
Such basis functions may be proposed a priori from a set of orthogonal polynomials, or may be derived from a covariance analysis of a set of known resistivity maps.
For 2-D tomographic resistance imaging, the defined surface area 302 of the resistive sensing membrane 301 may comprise any arbitrary shape. For simplification, in a use case where the defined surface area 302 is circular, the orthogonal basis polynomial functions ϕi(r) may be a priori polynomial basis functions described by the Zernike polynomial equations, as shown in
The integer n={0, 1, 2, . . . } ranks the resolution of the polynomial from low to high, and m satisfies −n≤m≤n. The radial function is described by Rnm(ρ) and the azimuthal function is a sine or cosine function with a harmonic order m. These basis functions are all orthogonal to each other, and the coefficient vector “a” in Eq. 2 represents a compact expression of the complete set of all possible resistivity maps described by the basis, where a cutoff at the maximum number of allowable basis states N is imposed, wherein
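The radial part Rnm(ρ) of the Zernike polynomials follows directly from their standard finite-sum definition; the sketch below assumes that definition (the function name is illustrative, and only n − |m| even yields a nonzero polynomial):

```python
from math import factorial

def zernike_radial(n, m, rho):
    """Radial part R_n^m(rho) of the Zernike polynomials (n - |m| even)."""
    m = abs(m)
    if (n - m) % 2:
        return 0.0
    return sum((-1) ** k * factorial(n - k)
               / (factorial(k) * factorial((n + m) // 2 - k)
                  * factorial((n - m) // 2 - k))
               * rho ** (n - 2 * k)
               for k in range((n - m) // 2 + 1))

# R_2^0(rho) = 2 rho^2 - 1 (the defocus term).
print(zernike_radial(2, 0, 1.0))  # -> 1.0
```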
For 3-D tomographic resistance imaging, the volume may have an arbitrary shape. For simplification, in a use case where the defined volume is spherical, the orthogonal basis polynomial functions ϕi(r) may be a priori polynomial basis functions described by the spherical harmonic equations expressed below.
Slm(ρ,θ,φ)=ρlYlm(θ,φ)
Ylm(θ,φ)=eimφPlm(cos θ) (4)
where the functions Plm(x) are associated Legendre polynomials:
such that the integer l={0, 1, 2, . . . } ranks the resolution of the polynomial from low to high, and m satisfies −l≤m≤+l.
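The associated Legendre polynomials Plm(x) can be evaluated with the standard upward recurrences; this is a sketch assuming m ≥ 0 and the Condon-Shortley phase convention (the function name is illustrative):

```python
from math import sqrt

def assoc_legendre(l, m, x):
    """Associated Legendre P_l^m(x) via the standard recurrences
    (Condon-Shortley phase included; assumes 0 <= m <= l)."""
    pmm = 1.0                                # start from P_m^m
    if m > 0:
        s = sqrt((1.0 - x) * (1.0 + x))
        for i in range(1, m + 1):
            pmm *= -(2 * i - 1) * s          # P_m^m = (-1)^m (2m-1)!! (1-x^2)^(m/2)
    if l == m:
        return pmm
    pmmp1 = x * (2 * m + 1) * pmm            # P_{m+1}^m
    if l == m + 1:
        return pmmp1
    for ll in range(m + 2, l + 1):           # upward recurrence in l
        pmm, pmmp1 = pmmp1, ((2 * ll - 1) * x * pmmp1 - (ll + m - 1) * pmm) / (ll - m)
    return pmmp1

print(assoc_legendre(2, 0, 0.5))  # P_2(x) = (3x^2 - 1)/2 -> -0.125
```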
A second example of an a priori polynomial basis may be a constrained polynomial basis.
A third example of an orthogonal basis is determined by applying a principal component analysis (PCA) to a representative set of likely resistance maps “a”. The covariance matrix of the resistance maps may be expressed as:
Cov(a)=Γa (6)
which can be diagonalized
Γa=WTΛW (7)
where the matrix Λ is a diagonal matrix, and WWT=I.
Λ=diag(λ1,λ2, . . . ,λN) (8)
The eigenvalues of the covariance matrix can be ordered λ1≥λ2≥ . . . ≥λN, and the largest N̂ eigenvalues of the covariance matrix are taken as the principal components.
ΓaPCA=WTΛPCAW, ΛPCA=diag(λ1,λ2, . . . ,λN̂,0, . . . ,0) (9)
Here W is comprised of all eigenvectors, W=[w1 w2 . . . wN]. Thus, the orthogonal basis can be represented by the reduced basis w1, w2, . . . , wN̂, and the eigenvectors of the covariance matrix with the largest eigenvalues λ1, . . . , λN̂ are used as orthogonal basis functions with index i whose upper limit N is the same as the maximum number of independent tetra-polar measurements.
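A compact sketch of this reduced-basis construction from Eqs. 6-9; the variable names and the random toy data are illustrative assumptions (numpy's symmetric eigendecomposition returns orthonormal eigenvectors):

```python
import numpy as np

def pca_basis(maps, n_keep):
    """Reduced orthogonal basis from a representative set of resistivity
    maps: eigenvectors of the covariance matrix (Eqs. 6-9) with the
    n_keep largest eigenvalues."""
    cov = np.cov(np.asarray(maps), rowvar=False)  # Cov(a) = Gamma_a
    evals, evecs = np.linalg.eigh(cov)            # diagonalize Gamma_a
    order = np.argsort(evals)[::-1]               # lambda_1 >= lambda_2 >= ...
    return evecs[:, order[:n_keep]]               # columns w_1 ... w_(N-hat)

# Toy example: 50 random maps over 4 model elements, keep top 2 components.
rng = np.random.default_rng(0)
W_hat = pca_basis(rng.normal(size=(50, 4)), 2)
```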
A fourth example of an orthogonal basis is a combination of the above methods (e.g., a priori, constrained, and principal component analysis basis functions). In this example, the principal component analysis of the third method may reveal only a limited number N̂ of basis functions before the covariance vanishes into the noise. But since the total number of independent measurements in the problem is N from Eq. 1, the disclosed method allows the remaining N−N̂ basis functions to be determined as a priori polynomial functions or constrained a priori polynomial functions, chosen to be orthogonal to the N̂ members of the principal component basis.
Another approach tailors the choice of orthogonal basis functions to the highest resolution in the constrained region where the information is most critical relative to a known background or other constraint. This approach overcomes the disadvantages of uniform finite element meshes over a volume. If the region of interest is local within that volume, then computational time and mathematical resolution are wasted on regions that are not useful.
If there are constraints and/or local regions of interest in the tomographic problem, then the orthogonal basis functions can be restricted to map features within only that region. Thus, the full power of the tomographic resolution is devoted to the region where information is needed.
The disclosed detections select electrode locations that have the highest resolution in discerning the orthogonal basis functions of interest. Current tomographic methods may place contacts at regular intervals around the periphery of the resistive object's volume to be mapped. This is detrimental for two reasons.
First, symmetric placement of contacts may result in a reduced number of independent measurements, reducing the maximum achievable resolution of the resistivity map. This point is illustrated in
Second, many tomographic systems may have a known background resistivity, where the goal is to map only deviations from this background. Strategic placement of contact electrodes may result in maximum sensitivity to these deviations. The disclosed detection applies the orthogonal basis functions to determine where contact electrodes should be placed to have maximum sensitivity in discerning independent measurements.
The detections also identify what pairs of current and voltage electrodes should be measured to provide the maximally independent set of complete measurements while maximizing signal-to-noise measurements. There are very many ways to collect a complete measurement set of N independent tetra-polar resistances, and by optimizing the choice of current-electrode pairs and voltage-electrode pairs, the disclosed detection makes it possible to choose a set that gives maximally independent measurements, and a maximal signal-to-noise ratio output. This is illustrated in
An exemplary application of a tomographic device is a pressure-sensitive polymer pad infused with carbon-nanotubes, carbon-black, or a combination of conductive particles that cause the polymer resistivity to change locally under an applied force as shown in
The resistivity pattern can be measured as shown in
In
In
The detection identifies which pairs of current and voltage electrodes are measured to provide maximal independence. In
Optimizing Electrical Impedance Tomography
Electrical impedance tomography (EIT) is a noninvasive imaging method whereby electrical measurements on the boundary of a conductive medium (e.g., the data) may be taken according to a prescribed protocol set and inverted to map the internal conductivity (e.g., the model). Described herein is a sensitivity analysis method and corresponding inversion and protocol optimization method that may generalize the criteria for tomographic inversion to minimize the model-space dimensionality and maximize data importance. The methods described herein may use sensitivity vectors represented as rows of a Jacobian matrix in the linearized forward problem to map targeted conductivity features from model-space to data-space. A volumetric outer-product of these vectors in model-space, referred to as the sensitivity parallelotope volume, may provide a figure-of-merit for data protocol optimization. Orthonormal basis functions that accurately constrain the model-space to features of interest may be defined from a priori information. By increasing the contact number to expand the number of possible measurements Dmax, and by reducing the model-space to a minimal number M0 of basis functions to describe the features of interest, the M0<<Dmax sensitivity vectors of greatest length and maximal orthogonality that span this model-space may be identified. The reduction in model-space dimensionality may accelerate the inversion by several orders of magnitude. The enhanced sensitivity may tolerate noise levels up to 1,000 times larger than standard protocols.
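One concrete reading of the sensitivity parallelotope volume is the Gram-determinant volume spanned by the selected sensitivity vectors (rows of the Jacobian) in model-space. The sketch below assumes this interpretation; the function name is illustrative:

```python
import numpy as np

def parallelotope_volume(J_rows):
    """Volume of the parallelotope spanned by sensitivity vectors
    (rows of the Jacobian) in model-space: sqrt(det(J J^T)).
    Larger volume = longer, more mutually orthogonal sensitivity vectors."""
    J = np.asarray(J_rows, dtype=float)
    return float(np.sqrt(np.linalg.det(J @ J.T)))

# Two orthogonal unit sensitivity vectors span a unit square (volume 1);
# nearly parallel vectors collapse the volume toward zero.
print(parallelotope_volume([[1, 0, 0], [0, 1, 0]]))     # -> 1.0
print(parallelotope_volume([[1, 0, 0], [1, 1e-3, 0]]))  # small
```

Maximizing this figure-of-merit over candidate measurement subsets selects the most independent, highest-sensitivity rows.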
Electrical impedance tomography (EIT) of conducting volumes holds great promise as an alternative to X-ray computed tomography (CT) and magnetic resonance imaging (MRI), with applications in medicine, engineering, and geoscience, but is not commonly used because of its diffuse nature. The disadvantage of EIT's diffuse nature may be compensated for by the advantages of being radiation-free, easily portable, wearable, and low-cost. But two longstanding challenges have prevented EIT from realizing rapid refresh rates and being robust against experimental noise. First, the computational time of existing algorithms typically increases geometrically with the number of model-space parameters, and the ill-posed nature of the inverse problem under standard algorithms may require a dense mesh of up to 10,000 such model parameters. Thus, fast EIT refresh times may become feasible only at the cost of reducing resolution. Second, present-day EIT relies on measurement protocols that include small-signal measurements that are vulnerable to experimental noise. Error in these small-signal measurements may unavoidably impair the fidelity of the inverse problem, thereby reducing the resolution. Described herein is a new sensitivity analysis method which may reduce computational time by orders of magnitude by reducing the model-space to the resolution of interest, and may improve robustness to noise by orders of magnitude by eliminating low sensitivity measurements from the EIT protocol.
The remainder of this disclosure is structured as follows. Section 2 will review EIT concepts and introduce the notation adopted herein for the disclosed methods of optimizing EIT. Section 3.1 introduces the reduced model-space, which may be chosen to have a minimized number of elements to represent the desired resolution or the features of interest. Section 3.2 explains determining a quantity of contacts that may be ideal to benefit from the sensitivity analysis method. Section 3.3 introduces the measurement-space whereby a deliberately large number of contacts may be chosen to exceed a minimum necessary to map the model-space. Section 3.4 introduces the data-space that may be chosen from this vast measurement-space to maximize sensitivity and linear independence. Finally, Section 3.5 introduces two iterative Newton's method approaches to solve the inverse problem under this sensitivity paradigm. Section 4 shows the results of simulations that apply these methods to different applications. An outline overview of the sensitivity enhanced measurement analysis and protocol is provided below:
(i) Model-Space (Sec. 3.1)
(ii) Contact Allocation (Sec. 3.2)
(iii) Measurement-Space (Sec. 3.3)
(iv) Data-Space (Sec. 3.4)
(v) Inverse Solution for Sensitivity Analysis (Sec. 3.5)
Components at the essence of electrical impedance tomography (EIT) are reviewed in this section to establish the notation used in the remainder of the present disclosure. The generalized forward problem
d=F(m) (10)
relates a vector m in model-space, whose basis vectors describe the conductivity distribution, to a vector d in data-space, whose basis vectors describe the measurements taken. Typically, this forward problem may be directly solved using Maxwell's equations and various boundary conditions, and details may be found in the book “Electrical Impedance Tomography: Methods, History and Applications” by Holder [1].
EIT is classified as an inverse problem, whereby the data vector d is given, and the model vector m is to be solved for. In the presence of experimental data noise n, the observed data dobs may be written as:
dobs=F(m)+n. (11)
The forward operation F(m) may be Taylor expanded to linear order,
where α∈{1, . . . D0} indexes the data values from 1 to the rank of the data vector D0 (the dimensionality of the data-space), and β∈{1, . . . , M0} indexes the model elements from 1 to the rank of the model vector M0 (the dimensionality of the model-space). The forward problem may then be represented by the Jacobian matrix
dobs=Jm+n, (13)
where each element of the Jacobian quantifies the change in a data-space element with respect to a change in a model-space component. If the magnitude of a Jacobian matrix element is large, then small changes in the corresponding model-space component may result in large swings in the observed data, implying high sensitivity of that particular measurement to that feature in model-space. It is precisely this concept of sensitivity which inspires the figure-of-merit for optimizing measurement protocols in Section 3.
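A toy numerical illustration of the linearized forward problem of Eq. 13, and of reading sensitivity off the Jacobian rows; all numbers are invented for illustration only:

```python
import numpy as np

# Linearized forward problem d_obs = J m + n (Eq. 13): each Jacobian row is
# one measurement's sensitivity to the model-space components.
J = np.array([[5.0, 0.1],      # measurement 1: very sensitive to m_1
              [0.1, 0.05]])    # measurement 2: low sensitivity overall
m = np.array([1.0, 2.0])       # model perturbation
noise = np.array([0.01, 0.01]) # experimental data noise n
d_obs = J @ m + noise

# Row norms rank measurement sensitivity: large rows mean small model
# changes produce large swings in the observed data.
sensitivity = np.linalg.norm(J, axis=1)
```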
2.1. The Model-Space
In most EIT treatments, the model-space is represented with a finite element mesh. The number of independent model elements may be indexed β∈{1, . . . , Mmax}, where Mmax is the number of independent mesh elements, and Mmax is sufficiently large for the discrete mesh to approximate a continuum solution to Maxwell's equations. But in order to solve the inverse problem, the conductivity is typically determined for each of these Mmax mesh elements, and the number of unique data values D0 measured at the sample boundary is typically much less than the number of mesh points, D0<<Mmax. The resulting inverse problem may therefore be greatly under-determined and lacking a unique solution, producing instead many equally likely conductivity maps for a given set of boundary data values. Thus, inverse solving methods typically implement regularization techniques to artificially select one solution from among the others, usually prioritizing smoothness with a weighting term in the regularization.
As an alternative to representing model-space with a large number Mmax of mesh pieces, one can instead represent it with a reduced number M0 of orthonormal basis functions that are defined over the mesh whereby M0<<Mmax. Conductivity maps may then be represented as linear superpositions of these individual basis functions, and prior knowledge of the problem may be used to select appropriate basis functions. These basis functions may thereby reduce model-space dimensionality, and may reduce or eliminate the under-determined nature of the problem.
A reduced dimensionality model-space may have the advantage of greatly reduced computational time. For example, any EIT inversion requires Jacobian solvers, and their computation time is empirically observed to scale approximately as M0^(3/2) with the number of model-space elements M0 [2]. One way to reduce the size of the model-space is the approach taken by Vauhkonen et al., who created orthonormal basis functions for thorax EIT imaging using anatomical information and a proper orthogonal decomposition (POD) of previously collected data [3]. A similar approach was taken by Lipponen et al., who used POD to produce orthonormal basis functions for the location of circular inhomogeneities within a sample of otherwise homogeneous conductivity [4]. The resulting basis functions show strong resemblance to orthonormal polynomial basis functions defined over a circular domain called Zernike polynomials [5]. Together, these results suggest that for a circular sample in the absence of data for proper orthogonal decomposition, the Zernike polynomial functions provide an excellent empirical basis. When selecting a subset of these polynomials to use as a basis, Vauhkonen et al. recommend using a number M0 less than or equal to the amount of information being measured, M0≤D0, to reduce the ill-posed nature of the problem [3]. This recommendation is supported by Tang et al., whose work indicates that one should also consider any knowledge of relative feature location when selecting basis functions [6].
2.2. The Data-Space
Each four-point resistance measurement may correspond to a single data element in the data vector to be inverted in the tomographic problem. From the C available contact electrodes, each resistance measurement may be made with two current and two voltage contact pairs, constituting a four-point measurement, often called a “tetrapolar” resistance measurement in the biomedical device community [7]. Adopting the van der Pauw notation [8], the current and voltage contacts of each four-point measurement are designated [I+, I−, V−, V+]. Though some EIT efforts consider two-point measurements [9] [10], the analysis here will be restricted to the four-point configuration since it tolerates finite contact impedances common to most experimental applications, such as in biomedicine [11]. When a particular set of D0 data measurements is designated for tomographic inversion, this sequence will be referred to as a measurement protocol.
There may be a large number of possible four-point measurement protocols for a given number of contacts C. There are
C(C−1)(C−2)(C−3)/24 (14)
different ways of selecting a particular combination of 4 contacts i, j, k, l, and there are three unique four-point resistance values that may be measured with these four contacts, for example, [i, j, k, l], [j, k, l, i], and [i, k, j, l]. All other permutations of these same 4 contacts may yield either the same resistance value due to the Onsager reciprocity relation (whereby resistance values are equal when current and voltage pairs are swapped) [12] or the negative thereof (due to polarity switching). It is worth noting that, of the three measurement values above, only two are linearly independent, and the third is a linear combination of the other two. Thus, the total number of data measurements that yield different, but not necessarily independent, resistance values is:
Dmax=3×C(C−1)(C−2)(C−3)/24=C(C−1)(C−2)(C−3)/8 (15)
This number Dmax includes many resistance values that are not linearly independent, such as when two measurements share three contacts, [i, j, k, l] and [i, j, l, m], and the resistance measurement [i, j, k, m] returns a value that is merely the sum of the first two. To make every possible measurement would not only be impractical, with the number of measurements Dmax diverging as O(C^4), it may also be entirely unnecessary given the amount of redundant information described above. Thus, a typical measurement protocol may concern a smaller data subset D0 of these measurements such that D0<<Dmax.
The number DI of possible independent measurements for C contacts [13] is important in the classification of a protocol as redundant, efficient, or compliant:
DI=C(C−3)/2 (16)
If the number of measurements in a protocol D0 is less than the number of independent measurements D0<DI, then we will describe the protocol as “compliant,” meaning that there is independent information available that may not be measured. If, on the other hand, D0>DI, then we shall call such a protocol “redundant,” since more measurements may be taken than necessary to acquire all the independent data. Finally, the case where all possible independent data measurements are represented in the protocol D0=DI, shall be called “efficient”. The above definitions are summarized in Table 1.
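The classification in Table 1 reduces to a comparison against DI = C(C−3)/2, as sketched below (the function name is an assumption):

```python
def classify_protocol(d0, c):
    """Classify a protocol of d0 measurements on c contacts as compliant,
    efficient, or redundant, using D_I = c(c - 3)/2 independent measurements."""
    d_i = c * (c - 3) // 2
    if d0 < d_i:
        return "compliant"   # independent information left unmeasured
    if d0 == d_i:
        return "efficient"   # all independent data measured exactly once
    return "redundant"       # more measurements than independent data

# The Sheffield protocol takes D_S = c(c - 3) measurements: redundant (r = 2).
print(classify_protocol(16 * 13, 16))  # -> redundant
```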
Redundant protocols may be helpful in noise reduction, since in the presence of experimental noise, a redundancy factor r defined as D0=r DI may reduce the overall measurement noise by a factor of √{square root over (r)}. However, both redundant and efficient protocols may include many low sensitivity measurements which, in the presence of experimental noise, may be a primary source of errors in the tomographic inversion.
Therefore, a key strategy of the disclosed methodology is to recognize that compliant protocols have the advantage of facilitating a sub selection of only the highest signal measurements for the protocol. Thus, compliant protocols such as those introduced in Section 3 may provide great robustness to experimental noise, as demonstrated in the simulated tomographic inversions in Section 4.
In choosing independent measurements to comprise a protocol, it is useful to identify which measurements may be relatively insensitive to changes in model-space, and which measurements may use similarly positioned contacts so as to be sensitive to largely the same features. Low sensitivity measurements may be identified by inspection as measurements where either the current or voltage terminal pairs (or both) neighbor each other, often referred to as “adjacent” measurements [14]. If the current pairs are neighboring, the voltage may be dropped only over a small region of the sample between those two contacts where no voltage contacts are available to measure. If the voltage terminals are neighboring, small voltage amplitudes may be measured. One may naively think that the best protocols with the strongest signals would then require that one avoid neighboring current or voltage pairs. However, the second problem of similar measurement pairs may come into play. Consider a low sensitivity measurement [i, j, l, m] where, for example, voltage contacts l and m are neighboring each other. This low sensitivity measurement may be added to a high sensitivity measurement [i, j, k, l] to yield the high sensitivity measurement [i, j, k, m]. Although both [i, j, k, l] and [i, j, k, m] may have high sensitivity and may be linearly independent from each other, their similarity aside from a small sensitivity measurement means that they may effectively provide no stronger a measurement than if the low sensitivity measurement were, itself, in the protocol instead of either of these.
We will now examine some protocols and qualify them according to their compliant, efficient, or redundant character, as well as whether they are prone to low sensitivity measurements as described above. In the published literature, most EIT studies use the Sheffield protocol to take four-point resistance measurements [13]. Under this protocol, the number of measurements is DS=C(C−3)=2DI, which are taken “pairwise adjacent” whereby two current contacts neighbor one another and two voltage contacts neighbor one another. Starting with a given current pair, the nearest voltage pair is measured, and then the voltage pair rotated around the sample for each subsequent measurement until all remaining contact pairs have been measured. Then, the current pair is rotated by one step and the process repeated [13]. For C contacts indexed counter-clockwise around the sample, the full Sheffield protocol is tabulated below for the first current pair in the first column, the second current pair in the second column, etc.:
The C-columns times (C−3)-rows in the table above confirm the number of measurements DS for the Sheffield protocol. The protocol redundancy factor of r=2 arises due to the Onsager reciprocity relation, whereby the top row is redundant with the bottom row, the second row with the second-to-last row, etc. When this redundancy is removed, the resulting efficient protocol with DI measurements will be referred to as the reduced Sheffield protocol. The empirically derived Sheffield protocol has the advantage of being easy to implement while treating all contacts symmetrically and has withstood the test of time in that no significantly superior redundant (or efficient) protocol in 2D has been determined, especially when no prior knowledge exists for the system under study.
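The pairwise-adjacent Sheffield protocol described above can be generated programmatically; the contact indexing, tuple format [I+, I−, V−, V+] ordering, and function name below are illustrative assumptions:

```python
def sheffield_protocol(c):
    """Pairwise-adjacent Sheffield protocol: for each neighbouring current
    pair, measure every neighbouring voltage pair that shares no contact
    with it. Yields D_S = c(c - 3) measurements (redundancy factor r = 2)."""
    protocol = []
    for i in range(c):                      # current pair (i, i+1)
        cur = (i, (i + 1) % c)
        for j in range(c):                  # rotate voltage pair (j, j+1)
            vol = (j, (j + 1) % c)
            if set(cur) & set(vol):
                continue                    # pairs must not share a contact
            protocol.append(cur + vol)
    return protocol

print(len(sheffield_protocol(16)))  # -> 208 = 16 * 13
```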
Other heuristic protocols have been proposed, for example, throughout a series of papers by Adler to satisfy external measurement constraints [14], [15]. One of these suggests, for example, that adjacent measurements are not always ideal, especially for medical applications where low current levels are medically safer. Adler's publications developed nine (9) problem-specific figures-of-merit for trial and error comparisons of measurement protocols [14][15][16][17][18][20]. However, as mentioned previously, all such efficient or redundant protocols suffer from the same drawback: they include either low sensitivity measurements or, equivalently, pairs of measurements which differ by a low sensitivity measurement, since these protocols seek to exhaust as many independent measurements as possible for a given number of contacts. As a result, such protocols may be susceptible even to low noise levels. As a final note, the measurement configurations proposed for these heuristic protocols are chosen in a somewhat ad hoc manner through trial and error, rather than according to a mathematically rigorous figure-of-merit. Section 3 discloses sensitivity as an appropriate figure-of-merit to identify the optimal compliant protocol from an increased space of possible measurements with increased contact number. The resulting sensitivity protocol is expected to be more robust to noise, while being mathematically justifiable through a figure-of-merit.
Other recent EIT efforts have tried to increase EIT performance through the selection of electrode placement and measurement protocols. For example, Karimi describes how variations in electrode placement can maximize “information gain” by using a double-loop Monte-Carlo approximation [21], and Smyl selected low-noise electrode positions for the 2D problem through deep learning with prior knowledge of the system under study [22]. The methods the sensitivity protocol uses for the measurement protocol optimization problem may also be applied to improve this electrode position optimization problem. Recent work by Ma uses a reduced selection of model-space features and measurement protocols via Bayesian methods for hand gesture recognition, using their methods to distinguish between eleven (11) different gestures [10]. Section 4 demonstrates how our disclosed sensitivity protocols show improved distinguishability in the presence of experimental noise.
2.3. Linear and Nonlinear Solutions
The methods to solve inverse problems fall into one of two categories: linear or nonlinear. In the linear problem introduced in Eq. 13, the forward problem is treated as simple matrix multiplication. The corresponding inverse problem may be written as:
m = J⁻ᵍ(d + n), (17)
where J⁻ᵍ is a generalized inverse of the Jacobian. In the case where model and data-spaces have the same size M0=D0, the Jacobian is square and the inverse of the Jacobian matrix J⁻¹ may be substituted (J⁻ᵍ = J⁻¹).
However, the general EIT inverse problem is inherently nonlinear. One approach to solving this nonlinear problem is iterative, such as Newton's method, which solves a linear approximation of the problem at each iteration. For example, from the Jacobian of a previous iteration the data residuals of the present iteration may be calculated at each step. Each iteration may be treated as an update to the one prior, starting with the problem reference as the initial "guess". The iterative process may stop once this updated correction is deemed sufficiently small that the problem has converged to an optimal fit to the data. Iterative Newton's method processes are the most common methods for solving the EIT problem, including in the popular open source MATLAB package EIDORS [23]. This package allows the use of mesh-based model-spaces and the selection of arbitrary data-spaces to simulate the linear and nonlinear forward EIT problem and solve the inverse EIT problem with a choice of various inverse solvers, such as the regularized inverse Newton's method approach described in Section 3.5.2.
3.1. Model-Space for Sensitivity Analysis
We first address the selection of model-space and model reference under the sensitivity analysis, with the intent of leveraging prior knowledge to minimize the model-space dimension M0. Following Section 2.1, the first step may be to determine a reference model from prior knowledge which will reduce the number of computational steps to converge on a final solution (Operation 1402). This reference may represent the background conductivity of the problem from which the final solution should be perturbatively related, if at all possible. An inhomogeneous reference has been encouraged by Grychtol and Adler whenever prior knowledge of the system allows [24]. The next step of the sensitivity analysis may be to identify a minimal number M0 of model basis functions to describe the features that are to be discerned so that the corresponding computational load may be minimized (Operation 1404). Again following Section 2.1, the minimal basis may either be determined according to a resolution criterion or by targeting specific features of interest, such as through proper orthogonal decomposition. [3]
The model-space that will be used in an example for illustration of these concepts herein will be a circular domain with orthonormal polynomials as basis functions. To simplify the choice of reference in the present example, we employ a homogeneous constant background reference for compatibility with generic 2D systems. Given the circular symmetry of our test problem, Zernike polynomials defined over a mesh will serve as conductivity basis functions for the model-space. The orthonormal Zernike polynomial functions are defined on a unit disk in terms of polar coordinates Znk(r, θ), and represent increasing resolutions determined by a radial polynomial order n and an azimuthal order k [25] as illustrated in
where Rn|k| is the standard Zernike radial polynomial,
Rn|k|(r) = Σs=0^((n−|k|)/2) [(−1)^s (n−s)! / (s! ((n+|k|)/2 − s)! ((n−|k|)/2 − s)!)] r^(n−2s).
The normalization prefactor Fnk normalizes the integral of the polynomials over the unit circle,
and contains the Kronecker delta function δn0. The Zernike polynomials satisfy the orthonormal condition,
∫θ=0^(2π) ∫r=0^1 Znk Zn′k′ dr dθ = δn,n′ δk,k′. (21)
Using this polynomial basis, we may represent any conductivity map as a linear combination of Zernike polynomials:
σ(r,θ) = Σn=0^N Σk=−n,−n+2, . . . ,n mn,k Znk(r,θ), (22)
where N is the highest polynomial order of the Zernike polynomials used. The weight of each polynomial is given by the coefficient mn,k, whose components together define the model-space vector representation of the conductivity. For a given polynomial order N, the dimension MZ(N) of the model-space up to and including that polynomial order is:
MZ(N) = (N + 1)(N + 2)/2, (23)
since each order n contributes n+1 azimuthal terms.
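The Zernike expansion above may be illustrated with a short sketch (assumption: the standard unnormalized Zernike conventions; the normalization prefactor Fnk and the mesh sampling of the disclosure are omitted, and (n, k) must satisfy n − |k| even and nonnegative):

```python
import math

# Evaluate an unnormalized Zernike polynomial Z_n^k(r, theta) and count
# the model-space dimension M_Z(N) for the basis of Eq. 22.
def zernike(n, k, r, theta):
    m = abs(k)  # assumes n - m is even and nonnegative
    R = sum((-1)**s * math.factorial(n - s)
            / (math.factorial(s)
               * math.factorial((n + m) // 2 - s)
               * math.factorial((n - m) // 2 - s))
            * r**(n - 2 * s)
            for s in range((n - m) // 2 + 1))
    return R * (math.cos(k * theta) if k >= 0 else math.sin(m * theta))

def model_space_dim(N):
    # each order n contributes n+1 azimuthal terms k = -n, -n+2, ..., n
    return sum(n + 1 for n in range(N + 1))   # = (N+1)(N+2)/2

print(model_space_dim(6))        # 28 basis functions up to order N = 6
print(zernike(2, 0, 1.0, 0.0))   # 1.0, since R_2^0(1) = 1 at the rim
```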
3.2. Contact Allocation for Sensitivity Analysis
A design aspect unique to the sensitivity analysis is contact allocation: determining the quantity of electrical contacts C that provides the desired resolution or features. For standard protocols the logic is normally reversed: the number of contacts C is given a priori, necessitating DI or more independent measurements according to Eq. 15, where the number of measurements D0=rDI is either efficient (r=1) or redundant (r>1), as described in Section 2.2. For example, the reduced Sheffield protocol is an efficient protocol with r=1 and the standard Sheffield protocol is redundant with redundancy factor r=2. In the sensitivity analysis disclosed herein, the number of possible independent measurements DI should greatly exceed the number of measurements in the protocol D0=M0, whereby D0<<DI. In practice, this is achieved by simply increasing the number of contacts, per Eq. 14. One may explicitly invert this relation and deduce the following expression for the minimum number of contacts C needed to make D0 independent data measurements:
C(D0) = ceil((3 + √(9 + 8D0))/2), (24)
where the ceiling function ceil(x) returns the smallest integer greater than or equal to x. According to Section 3.4, a compliant protocol is defined by the condition D0<DI, so a compliance factor c may be defined by DI = cD0. The number of contacts to achieve this compliance condition for an exactly determined Jacobian may therefore be C(cM0), thereby determining the number of contacts directly from the number of model-space basis functions M0 as well as the compliance factor c (Operation 1406). A compliance factor of c≥3 may be recommended; the larger c is, the likelier the protocol may be optimized using only large-signal measurements. Note that unlike the Sheffield protocol, there may be no redundant measurements, as the sensitivity analysis may rely on enhanced sensitivity rather than measurement redundancy to increase tolerance to noise.
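These counting relations may be checked numerically (a sketch assuming Dmax = C(C−1)(C−2)(C−3)/8 possible four-point measurements and DI = C(C−3)/2 independent measurements, consistent with the Dmax = 52,650 and C(D0) = 9 values quoted in Section 4):

```python
import math

def independent_measurements(C):
    """D_I: linearly independent four-point measurements for C contacts."""
    return C * (C - 3) // 2

def total_measurements(C):
    """D_max: all distinct four-point measurements for C contacts."""
    return C * (C - 1) * (C - 2) * (C - 3) // 8

def min_contacts(D0):
    """Invert D_I = C(C-3)/2 >= D0 for the smallest integer C (Eq. 24)."""
    return math.ceil((3 + math.sqrt(9 + 8 * D0)) / 2)

print(min_contacts(27))          # 9 contacts suffice for D0 = 27
print(total_measurements(27))    # 52650 possible measurements at C = 27
```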
3.3. Measurement-Space for Sensitivity Analysis
Measurement-space is introduced as an integral component of the sensitivity analysis method disclosed herein. Borrowing from the data-space discussion in Section 2.2, recall that the total number of possible data measurements Dmax greatly exceeds the number of linearly independent measurements DI. In this section, we shall define the space of all Dmax possible measurements as the measurement-space and introduce a metric whereby different measurements within this space may be quantified for their ability to distinguish model features, namely the sensitivity vector.
The sensitivity analysis proceeds as follows. Recalling the definition of a Jacobian, we define sensitivity coefficients Jα,β (Operation 1408) that measure the sensitivity of each α∈{1, . . . Dmax} data measurement to each β∈{1, . . . M0} model element:
Jα,β = ∂dα/∂mβ. (25)
This definition is identical to the Jacobian matrix elements of Eq. (12), except now the index α spans all of the Dmax possible data measurements, not merely the D0 data measurements that are used for the tomographic inversion. We will refer to this larger matrix as the sensitivity matrix of dimension Dmax×M0, from which D0 select rows will be drawn to form the D0×M0 Jacobian matrix for the linearized forward problem. These rows, then, form an important component of the analysis, as each possible four-point measurement may be represented as a vector μα in model-space expressing the sensitivity of all model elements with respect to the αth data measurement:
μα = [Jα,1 Jα,2 . . . Jα,M0]. (26)
The protocol optimization problem may now be reduced to selecting the D0 rows from this sensitivity matrix (of dimension Dmax×M0) that represent the data measurements best suited to resolving the M0 desired model features.
Note that the sensitivity vectors and sensitivity matrix are not limited to the Zernike polynomial basis. Other bases may be preferred when given prior knowledge of the system. For example, proper orthogonal decomposition of a prior dataset allows orthogonal features to be ranked from highest to lowest relevance in the hierarchy. [3] Other a priori resolution requirements or targeted feature bases may be adapted, as well.
3.4. Data-Space for Sensitivity Analysis
The D0 data measurements that comprise the data-space may be selected from the Dmax total measurements in measurement-space as an optimized subset that can span the desired M0-element model-space (Operation 1410). To simplify the inverse problem and eliminate the need for regularization, an exactly determined square matrix may be chosen for the data-space Jacobian such that D0=M0. As previously emphasized in Section 2.2, when optimizing a protocol it is important to avoid both measurements with low sensitivity and pairs of measurements with high similarity.
To address these protocol criteria, first a pruning threshold is determined to discard the lowest sensitivity measurements from consideration in the protocol. The magnitude of a sensitivity vector |μα| may quantify the sensitivity of that measured data value to any changes in the model, where vectors with greater magnitude may be more noise-robust and lead to smaller posterior variance in the estimated model parameters from the inverse problem. Thus, a ranked list of the Dmax sensitivity vectors may be pruned, whereby those shorter than a certain threshold may be considered low sensitivity and discarded. Whereas pruning may not be necessary for the moderately low number of contacts C=27 considered here with Dmax=52,650 different measurements per Eq. 14, it may be anticipated to be an important step in geometries with significantly more contacts.
Second, a criterion may be determined whereby similar measurements may be avoided within a protocol. In model-space, this similarity may manifest as nearly parallel orientation of the respective sensitivity vectors. The more orthogonal the sensitivity vectors are, the more they may represent independent measurements, giving a smaller posterior covariance of the model parameters in the inverse problem. Optimal protocols may therefore be formed from sets of long, mutually orthogonal sensitivity vectors, with the figure-of-merit for such a protocol being the D0-dimensional sensitivity parallelotope volume subtended by the sensitivity vectors of the constituent measurements. For example, starting with two measurements, a parallelogram in model-space may be formed, representing the area V2 subtended by the two respective sensitivity vectors μ1 and μ2. This area may be maximized provided the two chosen vectors are as long and orthogonal as possible. An additional third measurement with sensitivity vector μ3 may be most sensitive to independent model-space components provided its perpendicular component with respect to the plane of the original parallelogram is maximized, yielding a three-dimensional volume V3, etc. Thus, a D0-dimensional sensitivity parallelotope volume in model-space may be determined (Operation 1412). The product of the perpendicular components for all D0 sensitivity vectors in a measurement protocol may give the following definition for the D0-dimensional sensitivity parallelotope volume:
VD0 = Πα=1^D0 |μα⊥|, (27)
The perpendicular component μα⊥ of each successive measurement may be determined from the parallel component μα∥ of the αth sensitivity vector relative to the subspace spanned by the orthogonal components of the preceding α−1 measurements:
μα∥ = Σβ=1^(α−1) (μα · μ̂β⊥) μ̂β⊥, (28)
where μ̂β⊥ = μβ⊥/|μβ⊥|.
The perpendicular component μα⊥ is then:
μα⊥=μα−μα∥ (29)
The sensitivity parallelotope volume in model-space may be more formally calculated from the Jacobian made up of the sensitivity vectors of the D0 selected measurements. The sensitivity parallelotope volume associated with the Jacobian matrix is:
VD0 = √(det(JJᵀ)), (30)
and is equivalent to that derived in Eq. 27. The above sensitivity parallelotope volume figure-of-merit may allow an objective and generalized optimization of the tomographic protocol.
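The equivalence of the two volume definitions may be verified on a small hypothetical sensitivity matrix (three measurements of four model features; illustrative values only, not data from the disclosure):

```python
import math

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def volume_gram_schmidt(rows):
    """Eqs. 27-29: product of perpendicular component magnitudes."""
    basis, vol = [], 1.0
    for mu in rows:
        perp = list(mu)
        for e in basis:                 # subtract the parallel component
            c = dot(perp, e)
            perp = [p - c * ei for p, ei in zip(perp, e)]
        norm = math.sqrt(dot(perp, perp))
        vol *= norm                     # Eq. 27: multiply |mu_perp|
        basis.append([p / norm for p in perp])
    return vol

def volume_determinant(rows):
    """Eq. 30 with J J^T computed as the 3x3 Gram matrix."""
    g = [[dot(u, v) for v in rows] for u in rows]
    det = (g[0][0] * (g[1][1] * g[2][2] - g[1][2] * g[2][1])
           - g[0][1] * (g[1][0] * g[2][2] - g[1][2] * g[2][0])
           + g[0][2] * (g[1][0] * g[2][1] - g[1][1] * g[2][0]))
    return math.sqrt(det)

J = [[1.0, 0.5, 0.0, 0.2], [0.3, 1.0, 0.4, 0.0], [0.0, 0.2, 1.0, 0.6]]
print(abs(volume_gram_schmidt(J) - volume_determinant(J)) < 1e-9)  # True
```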
An example of the concept of parallelotope volume is illustrated in
Several different search algorithms have been developed that maximize the sensitivity parallelotope volume. One such algorithm is a guided greedy search that may work well for smaller contact numbers C<50. This algorithm may build up a protocol one measurement at a time, such that each newly added measurement produces the largest possible volume with respect to all prior selected measurements in the protocol. After a new measurement is selected, a secondary check of the other measurements may ensure that an alternate measurement will not increase the sensitivity parallelotope volume further. When a more optimal measurement is discovered during this double check, the checking process may start over again. Using this algorithm, and a compliance factor of c=3 corresponding to a three-fold increase in contact number over standard protocols, we have generated measurement protocols whose geometrically averaged sensitivity vectors are two (2) to three (3) orders of magnitude larger than those of the reduced Sheffield protocol. This leads to a commensurate increase in noise tolerance, such that two (2) to three (3) orders of magnitude more noise may be tolerated while still yielding a faithful tomographic map of the conductivity. This guided greedy search may be adapted to include a simulated annealing or Markov chain Monte Carlo approach that adds randomness to the search to improve search efficiency in the ever-increasing measurement-space. Notably, this optimization process need only be computed once for each EIT problem prior to any inverse solving, enabling rapid real-time refresh of the tomographic inverse problem.
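The core of the guided greedy search may be sketched as follows (an illustrative reimplementation, not the authors' code; the secondary double-check pass and annealing refinements described above are omitted, and the sensitivity rows here are random placeholders):

```python
import math
import random

def greedy_protocol(sensitivity_rows, D0):
    """Pick D0 row indices growing the sensitivity parallelotope volume."""
    basis, chosen = [], []              # orthonormalized picks so far
    for _ in range(D0):
        best, best_norm, best_perp = None, -1.0, None
        for i, mu in enumerate(sensitivity_rows):
            if i in chosen:
                continue
            perp = list(mu)             # strip components along prior picks
            for e in basis:
                c = sum(p * ei for p, ei in zip(perp, e))
                perp = [p - c * ei for p, ei in zip(perp, e)]
            norm = math.sqrt(sum(p * p for p in perp))
            if norm > best_norm:        # largest perpendicular component
                best, best_norm, best_perp = i, norm, perp
        chosen.append(best)
        basis.append([p / best_norm for p in best_perp])
    return chosen

random.seed(0)
rows = [[random.gauss(0, 1) for _ in range(5)] for _ in range(40)]
print(greedy_protocol(rows, 3))   # indices of the 3 selected measurements
```

Because each pick maximizes the newly added perpendicular component, the running product of those components (Eq. 27) grows as fast as the greedy heuristic allows.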
3.5. Inverse Solution for Sensitivity Analysis
To complete this EIT sensitivity analysis method, the inverse problem is solved (Operation 1414). As mentioned in Section 2, this is often done via an iterative Newton's method, whereby the nonlinear EIT problem is solved linearly at each iteration. In the following subsections, we will describe two approaches to a single iteration of a Newton's method solve. Either method may be iterated to solve the nonlinear problem. However, we have found that if perturbations from the initial guess are sufficiently small, a single iteration of the below methods provides sufficient convergence to solve the EIT inverse problem, producing equivalent results in a fraction (e.g., 1/100th) of the time taken by an iterative solve. For example, certain applications that require rapid-refresh, real-time tomography may benefit more from near-instantaneous inversion than from a few more significant digits of precision in the reconstruction.
3.5.1. Singular Value Decomposition (SVD): One method commonly used for Newton's method iterations is (Truncated) Singular Value Decomposition, or (T)SVD, solving. Singular value decomposition refers to the decomposition of the Jacobian into three (3) matrices: unitary matrices of left and right singular vectors (U, V) and a diagonal matrix of singular values (S).
J = USVᵀ. (31)
Here, the number of non-zero singular values in S is equal to the rank of the Jacobian. For each singular value, a column of U represents the contributing linear combination of data-space elements and a column of V stores the corresponding linear combination of model-space elements. The columns of U are the eigenvectors of JJᵀ, while the columns of V are the eigenvectors of JᵀJ. In (T)SVD solving, the decomposition in Eq. 31 forms the generalized inverse of J:
J⁻ᵍ = VS⁻¹Uᵀ. (32)
The combination of Eqs. 17 and 32 produce the following expression for the inverse solution in the presence of noise:
m = VS⁻¹Uᵀ(d + n), (33)
where (d+n=dobs) is the observed data that includes noise and m is the best possible reconstruction from the noisy data.
In the case that some singular values in S are metrically or numerically equivalent to zero, truncation may be used to replace the small singular values with zeros. If the small singular values were kept for inverse solving, their reciprocals within S⁻¹ may approach infinity and cause wildly inaccurate inverse solutions due to noise. Truncation may remedy this by ensuring that noisy information is ignored. To indicate truncation to k singular values, the singular value matrix may be written as Sk for some k < rank(S):
Sk = diag(s1, . . . , sk, 0, . . . , 0). (34)
The truncation point k may be selected with the discrete Picard condition [26] to maintain a solution that is not consumed by noise. This condition may create Picard coefficients from the observed data and the U matrix from singular value decomposition [27]:
p = Uᵀ(d + n). (35)
The Picard coefficients may decay until reaching the noise level n, where their decay levels off. The discrete Picard condition states that the decay of these Picard coefficients must be faster than the decay observed in the singular values s found in the diagonals of Sk. If there exists an index k beyond which the singular values sk decay faster than the Picard coefficients pk, truncation may become necessary. This (T)SVD solution is a fast and efficient way to form the inverse. In the nonlinear regime, the problem may be iterated with either the same or varying truncation points. In this work, we have written MATLAB code to perform non-iterative (T)SVD solves for our arbitrary polynomial basis function model-spaces.
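A single linear (T)SVD inversion per Eqs. 31-33 may be sketched as follows (a NumPy illustration on an assumed well-conditioned synthetic Jacobian, not the MATLAB code referenced above):

```python
import numpy as np

def tsvd_solve(J, d_obs, k):
    """m = V S_k^{-1} U^T d_obs, keeping only the k largest singular values."""
    U, s, Vt = np.linalg.svd(J, full_matrices=False)
    s_inv = np.zeros_like(s)
    s_inv[:k] = 1.0 / s[:k]     # reciprocals of kept singular values only
    return Vt.T @ (s_inv * (U.T @ d_obs))

# Synthetic Jacobian with known singular values 1.0 .. 0.1 (well
# conditioned); with noisy data, k < rank chosen by the discrete Picard
# condition would discard the noise-dominated directions.
rng = np.random.default_rng(1)
Q1, _ = np.linalg.qr(rng.normal(size=(27, 27)))
Q2, _ = np.linalg.qr(rng.normal(size=(27, 27)))
J = Q1 @ np.diag(np.linspace(1.0, 0.1, 27)) @ Q2.T
m_true = rng.normal(size=27)
m = tsvd_solve(J, J @ m_true, k=27)   # noise-free, full rank
print(np.allclose(m, m_true))         # True: exact recovery
```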
Not all EIT problems may be solved within the aforementioned linear regime, providing a use case for iterative solving. In practice, the same measurement protocol may be used for each iteration of a nonlinear solve. However, with a sufficiently large library of optimized sensitivity measurement protocols, one may start with the best protocol for the reference guess and switch to an improved protocol based on the results of an initial linear solve. Then, the new protocol may be used in the iterations.
3.5.2. Regularized Inverse
As mentioned in Section 2, the EIT community has developed multiple methods for improving iterative Newton's method solves of the EIT problem, one of which is by use of a regularized inverse. This popular approach, as described by Graham et al. [28], uses the regularized inverse to solve the inverse problem as:
m = (JᵀWJ + λG)⁻¹JᵀW(d + n), (36)
where G is a regularization matrix determined by the type of regularization used, such as flattening or smoothing, and W is a matrix that models noise. The hyperparameter λ controls the extent to which regularization may be applied to the problem.
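A single regularized inverse step per Eq. 36 may be sketched as follows (assumptions: an identity noise model W and an identity regularization matrix G; the disclosure leaves both choices open):

```python
import numpy as np

def regularized_solve(J, d_obs, lam, W=None, G=None):
    """Eq. 36: m = (J^T W J + lambda G)^{-1} J^T W d_obs."""
    D0, M0 = J.shape
    W = np.eye(D0) if W is None else W
    G = np.eye(M0) if G is None else G
    A = J.T @ W @ J + lam * G
    return np.linalg.solve(A, J.T @ W @ d_obs)

rng = np.random.default_rng(2)
J = rng.normal(size=(30, 10))
m_true = rng.normal(size=10)
d = J @ m_true
# Small lambda approaches the least-squares solution; large lambda
# damps the solution toward zero and increases the model error.
m_small = regularized_solve(J, d, lam=1e-8)
m_large = regularized_solve(J, d, lam=1e3)
print(np.linalg.norm(m_small - m_true) < np.linalg.norm(m_large - m_true))  # True
```

The hyperparameter λ thus trades data fidelity against the prior encoded in G, which is why its selection matters in noisy reconstructions.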
Either the regularized inverse approach or the (T)SVD approach previously described may be used with optimized sensitivity measurement protocols and reduced dimensionality model-spaces. For the sake of simplifying the examples in the following section, we chose to use our (T)SVD based solver within the linear, single iteration regime.
In this section the signal-to-noise advantage of the sensitivity measurement protocols disclosed herein is illustrated through the application of EIT to phantom models for problems in biomedical imaging and structural engineering. In the first example, a medical imaging scenario shows how the selection of data-space measurements via the sensitivity analysis method results in a factor of 30 times higher noise tolerance compared to the reduced Sheffield protocol. In the language of linear algebra, the sensitivity optimized measurement protocols may lead to a posterior model covariance matrix that has both low variance and low covariance values. In the second example, the same analysis is shown for a structural engineering scenario in which prior knowledge is utilized in the selection of basis functions, leading to an even greater factor of 1,000 times increase in noise tolerance compared to reduced Sheffield. The EIDORS EIT package was used to solve the forward problem and plot the reconstructions seen in the following examples. The inverse solves were performed using our own linearized SVD solver as described in Section 3.5.1.
4.1. Biomedical Example
One popular proposed biomedical application for EIT technology is chest imaging. For example, measurements of stroke volume and ejection fraction for a heart may diagnose cardiac conditions such as heart failure and coronary syndrome [29][30][31][32]. To simulate this, a circular sample with a homogeneous conductivity serves to model a chest cavity cross section, and a smaller included circle with 20% higher conductivity represents the blood-filled heart. For the sake of this demonstration of methods, other chest cavity features were ignored. In practice, features of varying conductivity, such as the lungs, would be included as an inhomogeneous background reference. The steps of the sensitivity analysis disclosed herein were applied to this scenario. First, the homogeneous circular reference and a desired spatial resolution corresponding to M0=27 polynomials were used to form a Zernike basis model-space. For the exactly determined inverse problem, an equivalent data-space dimensionality of D0=27 independent measurements may be required. From Eq. 24, this may require at least C(D0)=9 contacts, chosen to be equally spaced. However, as recommended in Section 3.2, a compliance factor of c=3 is used to determine a better contact number for maximizing sensitivity, namely C=27. Next, a standard EIT forward solver [33] is used to calculate the sensitivity coefficients that make up the sensitivity vectors for each of the Dmax=52,650 possible measurements. Then, the sensitivity optimization algorithm disclosed herein generated an optimal data-space of D0 of these measurements to maximize the sensitivity parallelotope volume. In order to compare with standard EIT methods, a DI=27 measurement data-space was also formed from the C(DI)=9 contact reduced Sheffield protocol.
Using the parallelotope volume as a basis for comparison, the sensitivity protocol had a 27-dimensional parallelotope volume that was 10²⁸ times larger than that of the Sheffield protocol, corresponding to an average enhancement of the sensitivity by a factor of 11 times per measurement.
To show the consequences of this enhanced sensitivity in the tomographic inversion, this EIT example was solved with our own linear, (T)SVD solver per Section 3.5. To avoid biased inverse solving, simulated data was generated on a very fine mesh and the Zernike polynomial model-space was used for inversion. Gaussian noise of varying amplitude with standard deviation η was added to the simulated data to emulate realistic experimental conditions. After solving noisy data sets for both our protocol and the reduced Sheffield protocol, results were compared to assess the ability of each protocol to differentiate between a “normal heart” and an “enlarged heart”.
As a metric for differentiating these two cases, a distinguishability criterion z was determined, inspired by the work of Adler [14]. The distinguishability metric treats the average result of many noisy EIT inverse solves as a vector in model-space:
m̄ = E[m], (37)
where E[⋅] is the ensemble expected value. The variance of all the noisy inversions of a given case forms an ellipsoidal surface in model-space described by the covariance matrix Σ:
Σ = E[(m − m̄)(m − m̄)ᵀ], (38)
whereby the variances and covariances are, respectively:
σβ² = E[(mβ − m̄β)²], (39)
σβγ² = E[(mβ − m̄β)(mγ − m̄γ)], (40)
for β, γ∈{1, . . . M0}. When comparing two distinct ensembles of noisy inversions, in this example a "normal heart" with average model-space vector m̄n and an "enlarged heart" with average model-space vector m̄l, the variance of each case may be projected onto the direction connecting the two averages:
Δmn,l² = ûᵀΣn,lû, (41)
where the unit vector û points from m̄n to m̄l.
Distinguishability z may then be defined as the distance in model-space between the two cases relative to the combined standard deviations of the two cases:
z = |m̄l − m̄n| / (Δmn,l + Δml,n), (42)
where Δmn,l and Δml,n are the projected standard deviations of Eq. 41 evaluated with the covariance matrix of each respective case.
For z>>1, the distinguishability is very high, and the resulting reconstructed images of the two cases are clearly different from each other in spite of the noise. Furthermore, the reconstructed images for each case are indistinguishable by eye from the noiseless reconstruction. However, when the value of z is less than 1, noise has overwhelmed the discernible difference between the two cases, and the patterns are no longer reliably distinguishable, nor do the reconstructed images exactly resemble the noiseless reconstruction.
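The distinguishability computation may be sketched for two synthetic ensembles of model-space vectors (illustrative random data only; the projected variances follow Eq. 41):

```python
import numpy as np

def distinguishability(ensemble_a, ensemble_b):
    """z: separation of ensemble means over combined projected std devs."""
    A, B = np.asarray(ensemble_a), np.asarray(ensemble_b)
    mean_a, mean_b = A.mean(axis=0), B.mean(axis=0)
    u = mean_b - mean_a
    u = u / np.linalg.norm(u)             # unit vector between the means
    var_a = u @ np.cov(A.T) @ u           # Eq. 41: projected variance
    var_b = u @ np.cov(B.T) @ u
    return np.linalg.norm(mean_b - mean_a) / (np.sqrt(var_a) + np.sqrt(var_b))

rng = np.random.default_rng(3)
near = rng.normal(0.0, 1.0, size=(1000, 5))   # ensemble of noisy inversions
far = rng.normal(5.0, 1.0, size=(1000, 5))    # well-separated second case
print(distinguishability(near, far) > 1.0)    # True: z >> 1, distinguishable
```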
In
4.2. Structural Engineering Example
The second demonstration of the sensitivity analysis method described herein will use a concrete pillar as a structural engineering example. In practice, cylindrical concrete pillars may be reinforced with equally spaced conducting wrought iron reinforcement rods. Maintenance inspections of historical structures or enforcement of building codes may require identifying the amount of reinforcement in a finished structure. To simulate this problem, the cross-section of a concrete pillar is modeled as a homogeneous circular sample with either three (3) or six (6) "rods" at regular intervals within its volume. The model-space for this problem was strategically constructed using the prior knowledge that the solution would have 3-fold rotational symmetry, and M0=27 polynomial basis functions were selected under that constraint. Of the standard Zernike polynomials, the first M0=27 polynomial functions that satisfy the 3-fold symmetry requirement require sampling polynomials up to order n=12. (For example,
Linear (T)SVD is again used to solve the inverse problem. Due to the magnitude of the singular values from the Jacobian matrices for this particular model-space, all inverse solves, both of the sensitivity protocol and the reduced Sheffield protocol, were truncated to 18 singular values. Then, the distinguishability z for various noise levels was calculated to compare the ability to differentiate three (3) and six (6) rods under both protocols. The Gaussian noise with standard deviation η was once again simulated for 1000 different trials. The results in
The present disclosure details a sensitivity analysis for optimizing the EIT inverse-problem for noise robust measurements and improved computational speed. The above analysis demonstrates five steps for the careful selection of model-space and data-space elements, leveraging prior information and introducing the sensitivity parallelotope volume figure of merit. The sensitivity analysis procedure chooses a reduced dimensionality model space in which to solve the EIT problem, determines how many contacts can yield the necessary model resolution or features of interest, defines a measurement-space which contains the sensitivity vectors that describe all possible measurements, selects the optimal data-space measurements as a subset of this measurement space, and finally, solves the inverse problem.
With the proposed sensitivity analysis method, the goal of rapid-refresh EIT inversion is brought within reach because of two advances operating in tandem. First, the choice of a polynomial basis may greatly reduce the dimensionality of the model-space which, in turn, may greatly reduce the computation time of the problem in comparison to the standard regularization problem with a high density of mesh points. Second, the introduction of the sensitivity parallelotope volume may facilitate the optimization of measurement protocols for maximum sensitivity, resulting in significantly improved robustness to noise in comparison to conventional protocols. In the examples presented to illustrate the methods disclosed herein, we observed a factor of up to 1,000 times improvement in noise tolerance compared to standard EIT protocols.
The functions, acts or tasks illustrated in the Figures or described may be executed in a digital and/or analog domain and in response to one or more sets of logic or instructions stored in or on non-transitory computer readable medium or media or memory. The functions, acts or tasks are independent of the particular type of instructions set, storage media, processor or processing strategy and may be performed by software, hardware, integrated circuits, firmware, microcode and the like, operating alone or in combination. The memory may comprise a single device or multiple devices that may be disposed on one or more dedicated memory devices or disposed on a processor or other similar device. When functions, steps, etc. are said to be “responsive to” or occur “in response to” another function or step, etc., the functions or steps necessarily occur as a result of another function or step, etc. It is not sufficient that a function or act merely follow or occur subsequent to another. The term “substantially” or “about” encompasses a range that is largely (anywhere a range within or a discrete number within a range of ninety-five percent and one-hundred and five percent), but not necessarily wholly, that which is specified. It encompasses all but an insignificant amount.
Other systems, methods, features and advantages will be, or will become, apparent to one with skill in the art upon examination of the figures and detailed description. It is intended that all such additional systems, methods, features and advantages be included within this description, be within the scope of the disclosure, and be protected by the following claims.
This disclosure claims priority to and the benefit of U.S. Provisional Patent Application Ser. No. 63/273,080 titled “Optimized Electrical Impedance Tomography”, filed on Oct. 28, 2021; U.S. Nonprovisional patent application Ser. No. 17/295,318 titled “High Resolution Two-Dimensional Resistance Tomography”, filed on Nov. 29, 2019; and U.S. Provisional Patent Application Ser. No. 62/772,369 titled “High Resolution Two-Dimensional Resistance Tomography”, filed on Nov. 28, 2018; all of which are incorporated by reference in their entirety.
This invention was made with government support under grant numbers DMR1729016 and 1912694 awarded by the National Science Foundation. The government has certain rights in the invention.
Provisional Applications:
Number | Date | Country
63/273,080 | Oct. 2021 | US
62/772,369 | Nov. 2018 | US
Continuation Data:
Relation | Number | Date | Country
Parent | 17/295,318 | May 2021 | US
Child | 18/049,551 | | US