<?xml-model href='http://www.tei-c.org/release/xml/tei/custom/schema/relaxng/tei_all.rng' schematypens='http://relaxng.org/ns/structure/1.0'?><TEI xmlns="http://www.tei-c.org/ns/1.0">
	<teiHeader>
		<fileDesc>
			<titleStmt><title level='a'>Comparison of statistical inversion with iteratively regularized Gauss Newton method for image reconstruction in electrical impedance tomography</title></titleStmt>
			<publicationStmt>
				<publisher></publisher>
				<date>10/01/2019</date>
			</publicationStmt>
			<sourceDesc>
				<bibl> 
					<idno type="par_id">10341446</idno>
					<idno type="doi">10.1016/j.amc.2019.03.063</idno>
					<title level='j'>Applied Mathematics and Computation</title>
<idno type="issn">0096-3003</idno>
<biblScope unit="volume">358</biblScope>
<biblScope unit="issue">C</biblScope>					

					<author>Sanwar Ahmad</author><author>Thilo Strauss</author><author>Shyla Kupis</author><author>Taufiquar Khan</author>
				</bibl>
			</sourceDesc>
		</fileDesc>
		<profileDesc>
			<abstract><ab><![CDATA[In this paper, we investigate image reconstruction for the Electrical Impedance Tomography (EIT) problem using a statistical inversion method based on Bayes' theorem and an Iteratively Regularized Gauss Newton (IRGN) method. We compare the traditional IRGN method with a new Pilot Adaptive Metropolis algorithm that (i) enforces smoothing constraints and (ii) incorporates a sparse prior. The statistical algorithm reduces the reconstruction error in terms of the ℓ2 and ℓ1 norms in comparison to the IRGN method for the synthetic EIT reconstructions presented here. However, there is a trade-off between the reduced computational cost of the deterministic method and the higher resolution of the statistical algorithm. We bridge the gap between these two approaches by using the IRGN method to provide a more informed initial guess to the statistical algorithm. Our coupling procedure improves convergence speed and image resolvability of the proposed statistical algorithm.]]></ab></abstract>
		</profileDesc>
	</teiHeader>
	<text><body xmlns="http://www.tei-c.org/ns/1.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xlink="http://www.w3.org/1999/xlink">
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1.">Introduction</head><p>The Electrical Impedance Tomography (EIT) problem involves collecting electrical measurements on the smooth boundary &#8706;&#937; to determine the spatially varying electrical conductivity distribution &#947; within the bounded region &#937; &#8838; R^d (d = 2, 3). We assume &#947; is a strictly positive, isotropic and bounded conductivity distribution with no current sources inside &#937;. We represent the EIT forward problem with the following elliptic differential equation, div(&#947;&#8711;u) = 0 in &#937;, <ref type="bibr">(1)</ref> where u &#8712; H^1(&#937;) is the electric potential and &#947; is known.</p><p>An EIT experiment involves applying an electrical current (Neumann data) I on &#8706;&#937; and measuring the resulting electrical potential (Dirichlet data) U on &#8706;&#937;. The collected data then provide information on the Neumann-to-Dirichlet (NtD) map. Partial knowledge of the NtD map is used to approximate &#947; from a set of EIT experiments using different input currents <ref type="bibr">[1]</ref><ref type="bibr">[2]</ref><ref type="bibr">[3]</ref>. EIT can be applied to nondestructive testing <ref type="bibr">[4]</ref>, the monitoring of oil and gas mixtures in oil pipelines <ref type="bibr">[5]</ref>, noninvasive medical imaging <ref type="bibr">[6,</ref><ref type="bibr">7]</ref>, etc. For example, EIT has been used in civil engineering to monitor water infiltration into soil <ref type="bibr">[8]</ref><ref type="bibr">[9]</ref><ref type="bibr">[10]</ref><ref type="bibr">[11]</ref>. Stacey <ref type="bibr">[12]</ref> also studied the feasibility of EIT for monitoring moisture movement in Berea sandstone. Hou and Lynch <ref type="bibr">[13]</ref> applied EIT to loaded fiber-reinforced cementitious composites.</p><p>EIT is well known to suffer from a high degree of nonlinearity and severe ill-posedness <ref type="bibr">[14,</ref><ref type="bibr">15]</ref>. 
Therefore, regularization is required to produce reasonable electrical impedance images. Most reconstruction methods are deterministic, such as the factorization method <ref type="bibr">[17]</ref>, the d-bar method <ref type="bibr">[18]</ref>, and variational-type methods for least-squares fitting. The variational-type methods minimize a discrepancy functional using Tikhonov-based regularization or an iterative method applied to a linearized or fully nonlinear model, such as sparsity constraints <ref type="bibr">[14,</ref><ref type="bibr">15]</ref> or the iteratively regularized Gauss-Newton method <ref type="bibr">[16]</ref>, to mention a few examples. These analytical methods can be effective in determining specific conductivity distributions, but statistical inversion methods <ref type="bibr">[19]</ref> offer an alternative approach. In <ref type="bibr">[20]</ref>, <ref type="bibr">Kaipio et al.</ref> optimize the current patterns based on criteria involving functionals of the posterior covariance matrix. The Bayesian approach has also been used to study the errors from model reduction and partially unknown geometry <ref type="bibr">[21,</ref><ref type="bibr">22]</ref>. In <ref type="bibr">[23]</ref>, Bardsley quantified the uncertainty of MCMC-based image reconstruction. Statistically, sparsity regularization enforces an &#8467;_p prior on the expansion coefficients for a certain basis, analogous to the deterministic approaches for EIT <ref type="bibr">[14,</ref><ref type="bibr">15,</ref><ref type="bibr">24]</ref>. In this paper, we apply the Pilot Adaptive Metropolis algorithm proposed in <ref type="bibr">[25]</ref><ref type="bibr">[26]</ref><ref type="bibr">[27]</ref> to investigate the algorithm's efficacy with synthetic data. We use a TV prior and a combination of a TV prior and an &#8467;_1-type prior to solve a Bayesian inverse problem whose dimension is neither too high nor too low (approximately 500 unknowns). 
In <ref type="bibr">[40]</ref>, Lassas and Siltanen describe how the solution to a Bayesian inverse problem diverges as the dimension of the problem tends to infinity; however, for the dimension considered in our problem (approximately 500 unknowns), we found experimentally that our method converges to a solution. Similarly, in <ref type="bibr">[41]</ref>, Lucka found that, for &#8467;_1-type priors, the efficiency of MCMC methods decreases dramatically as the sparsity or dimension of the problem increases, which we confirmed in numerical experiments where only an &#8467;_1 prior is used on its own. In this study, we find that our MCMC method converges efficiently when we use a combination of &#8467;_1-type and TV priors.</p><p>The paper is organized as follows. In Section 2, we describe the complete electrode model (CEM) for the EIT problem and formulate the inverse problem. We then describe the IRGN method in Section 3. In Section 4, we formulate the statistical inverse problem, and we discuss choices for regularizing the prior density. In Section 5, we discuss the Markov Chain Monte Carlo (MCMC) method and the Pilot Adaptive Metropolis algorithm. We compare the inversion results from the statistical approach with the traditional IRGN method in Section 6 to demonstrate the efficacy of our coupled model. Lastly, we conclude with Section 7.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.">Complete electrode model</head><p>Suppose L electrodes have been fixed around the surface of an object. Current is applied to a subset of these electrodes, and the resulting voltage is measured at all other electrodes. This EIT experiment is repeated several times with different electrode configurations to efficiently characterize the imaged object. The inverse problem then is to reconstruct the conductivity distribution inside the object from a finite set of surface measurements.</p><p>Eq. (<ref type="formula">1</ref>) is used to solve for the electric potential at the electrodes and inside &#937; for some applied current I. We obtain (1) by a scaling analysis of Maxwell's equations for electromagnetic fields inside of an object <ref type="bibr">[1,</ref><ref type="bibr">28]</ref>. We denote the class of admissible conductivities as Q.</p><p>Currents are applied to electrodes on the surface &#8706;&#937; of the body. Let j denote the inward unit normal component of the current density produced by the applied current on the surface. Then, the Neumann boundary condition is &#947; &#8706;u/&#8706;n = j on &#8706;&#937;. (2)</p><p>Equations (1) and (<ref type="formula">2</ref>) are known as the continuum model. In practice, the current density j at the electrodes is unknown, but &#8747;_{E_l} &#947; (&#8706;u/&#8706;n) dS = I_l is known, where n is the unit outward normal to &#8706;&#937;, E_l is the surface area of the l-th electrode and I_l is the current injected into E_l. Thus the Neumann condition (2) can be rewritten as &#8747;_{E_l} &#947; (&#8706;u/&#8706;n) dS = I_l, l = 1, ..., L. (3)</p><p>Furthermore, we know j = 0 for the current density on the boundary between the electrodes, such that &#947; &#8706;u/&#8706;n = 0 on &#8706;&#937; \ &#8746;_l E_l. (4)</p><p>The potential on each electrode is considered to be constant: U_l = constant. This property is known as the shunting effect, which is represented by u = U_l on E_l, l = 1, ..., L. (5)</p><p>There is also an electro-chemical effect due to the formation of a thin and highly resistive layer between the electrodes and the body. 
Electrical impedance from this layer, z_l, is called the effective contact impedance or surface impedance at E_l. This effect changes <ref type="bibr">(5)</ref> to u + z_l &#947; &#8706;u/&#8706;n = U_l on E_l, l = 1, ..., L. (6)</p><p>The complete electrode model (CEM) consists of (1), (<ref type="formula">3</ref>), (<ref type="formula">4</ref>) and (<ref type="formula">6</ref>), together with the conservation of charge and grounding conditions &#8721;_{l=1}^{L} I_l = 0 and &#8721;_{l=1}^{L} U_l = 0.</p><p>Although the CEM has a unique solution, the accuracy of the model is judged by how well it predicts experimental measurements <ref type="bibr">[29]</ref>.</p><p>For the FEM solution of the forward CEM model, we reformulate the CEM in variational form (see <ref type="bibr">[28]</ref> for details): find (u, U) &#8712; H such that B((u, U), (v, V)) = f(v, V) for all (v, V) &#8712; H, (9) where f(v, V) = &#8721;_{l=1}^{L} I_l V_l is a functional that maps H &#8594; R. Existence and uniqueness of a solution (u, U) &#8712; H can be shown using the Lax-Milgram lemma. In <ref type="bibr">[29]</ref>, it is shown that (u, U) satisfies the CEM conditions if and only if it satisfies (9).</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.1.">FEM discretization of CEM</head><p>Following the typical FEM approach, let T = {T_1, ..., T_|T|} be a triangulation of &#937; with N mesh points for the finite dimensional subspace. We must determine the coefficients &#945;_i and &#946;_k in this formulation. Choosing v = &#966;_i and V = &#957;_k, where the set of test functions is of the form (&#966;_1, 0), ..., (&#966;_N, 0), (0, &#957;_1), ..., (0, &#957;_{L-1}), results in the following system of equations in matrix form: &#194;&#952; = f, <ref type="bibr">(10)</ref> where &#952; = (&#945;, &#946;)^T &#8712; R^{N+L-1}, the matrix &#194; is the standard FEM system matrix (see <ref type="bibr">[28]</ref> for details) and the right-hand side f is determined by the applied currents.</p><p>For a fixed current vector I and positive contact impedances z_l, we define F to be the forward operator that maps the conductivity &#947; to the solution of the forward problem obtained by the FEM described above. <ref type="bibr">[30]</ref> showed that this operator is Fr&#233;chet differentiable. Recall that Fr&#233;chet differentiability of F at &#947; means that lim_{||h|| &#8594; 0} ||F(&#947; + h) - F(&#947;) - F'(&#947;)h|| / ||h|| = 0.</p><p>The forward problem uses the known conductivity distribution &#947; to find the boundary data associated with a given source.</p><p>Our goal is to estimate the conductivity distribution &#947; from all pairs of current vectors I &#8712; R^L and resulting voltage vectors U &#8712; R^L. Since the inverse problem is severely ill-posed given only a finite set of measurements, we recover &#947; by minimizing a cost functional of the residual ||F(&#947;) - u^&#948;||, where u^&#948; approximates the exact data u with accuracy &#948;, i.e., ||u^&#948; - u|| &#8804; &#948;. (12)</p><p>However, regularization is needed to stabilize the ill-posed problem, and instead we minimize ||F(&#947;) - u^&#948;||^2 + &#955; R(&#947; - &#947;*), (13) where &#955; is the regularization parameter, R(&#8226;) is the regularization term and &#947;* is the known background. There are several choices for R(&#8226;). 
The &#8467;_p regularization R_{&#8467;p}(&#947; - &#947;*) is defined as R_{&#8467;p}(&#947; - &#947;*) = ||&#947; - &#947;*||_p^p,</p><p>where 0 &lt; p &#8804; 2 is a constant <ref type="bibr">[19]</ref>. The &#8467;_p regularization enforces sparsity for 0 &lt; p &#8804; 1 and smoothness for p = 2. Total variation regularization is used for most practical applications to obtain smooth images. Total variation is defined as <ref type="bibr">[42,</ref><ref type="bibr">43]</ref> R_{TV}(&#947;) = |D&#947;|(&#937;),</p><p>where |D&#947;|(&#937;) is a finite Radon measure. In particular, for &#947; &#8712; W^{1,1}(&#937;), it reduces to the standard notation for TV regularization, R_{TV}(&#947;) = &#8747;_&#937; |&#8711;&#947;| dx.</p><p>The regularization function is represented by a norm for most analytical methods. In this paper we use one of the most successful methods for solving the ill-conditioned problem, the weighted &#8467;_2 penalty known as Tikhonov regularization. The cost functional from Tikhonov regularization is ||F(&#947;) - u^&#948;||^2 + &#955; ||W(&#947; - &#947;*)||_2^2,</p><p>where W is a weight function that becomes a weight matrix upon discretization. There are several iterative approaches to minimize <ref type="bibr">(13)</ref>. In this paper, we use a modified iteratively regularized Gauss-Newton (IRGN) method for the minimization, which is described in the next section.</p></div>
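The FEM system (10) is assembled and solved like any Galerkin system. As a minimal sketch of that idea only (a one-dimensional analogue, not the authors' 2-D CEM code; the function name and boundary setup are illustrative assumptions), the following assembles a stiffness matrix from per-element conductivities and solves the resulting linear system for the potential:

```python
import numpy as np

def solve_1d_forward(gamma, current=1.0, length=1.0):
    """Solve a 1D analogue of the FEM system: -(gamma u')' = 0 on [0, length]
    with u(0) = 0 and an injected flux (``current``) at x = length.

    gamma : array of per-element conductivities.
    Returns the nodal potentials u (length = number of elements + 1).
    """
    n = len(gamma)
    h = length / n                      # uniform element size
    K = np.zeros((n + 1, n + 1))       # global stiffness matrix
    for e in range(n):                 # element-by-element assembly
        k_loc = gamma[e] / h * np.array([[1.0, -1.0], [-1.0, 1.0]])
        K[e:e + 2, e:e + 2] += k_loc
    f = np.zeros(n + 1)
    f[-1] = current                    # injected "current" at the right end
    # Ground the left node (Dirichlet u(0) = 0) and solve the reduced system.
    u = np.zeros(n + 1)
    u[1:] = np.linalg.solve(K[1:, 1:], f[1:])
    return u

u = solve_1d_forward(np.ones(8))
```

With unit conductivity the potential is linear in x, mirroring how the 2-D forward map F sends a conductivity to boundary voltages.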
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.">Iteratively regularized Gauss-Newton method</head><p>Suppose &#955;_k is a sequence of regularization parameters satisfying &#955;_k &gt; 0, &#955;_k &#8805; &#955;_{k+1} and &#955;_k/&#955;_{k+1} &#8804; d&#770; for some constant d&#770; &#8805; 1.</p><p>Let the unique global minimum of (13) be denoted by &#947;&#771;. Assume &#947;&#771; satisfies the invertibility conditions and the conditions for F as described in <ref type="bibr">[31]</ref>. Then, the IRGN iteration for (<ref type="formula">13</ref>) is &#947;_{k+1} = &#947;_k - (F'(&#947;_k)^T F'(&#947;_k) + &#955;_k W^2)^{-1} (F'(&#947;_k)^T (F(&#947;_k) - u^&#948;) + &#955;_k W^2 (&#947;_k - &#947;*)), where &#947;_k is the k-th approximation to &#947;, F'(&#947;_k) is the Jacobian matrix at the k-th iteration, and W^2 = W^T W. The above algorithm is generalized further using a line search procedure from Smirnova et al. <ref type="bibr">[31]</ref> with a variable step size s_k.</p><p>The modified IRGN algorithm is then &#947;_{k+1} = &#947;_k + s_k p_k.</p><p>The line search parameter s_k is chosen to minimize the scalar objective function s &#8594; J(&#947;_k + s p_k), where p_k is the search direction, which solves (F'(&#947;_k)^T F'(&#947;_k) + &#955;_k W^2) p_k = -(F'(&#947;_k)^T (F(&#947;_k) - u^&#948;) + &#955;_k W^2 (&#947;_k - &#947;*)).</p><p>This step is accomplished through a backtracking strategy until either one of the strong Wolfe conditions (sufficient decrease and curvature, with constants c_1 and c_2) is satisfied <ref type="bibr">[33]</ref>, or the maximum number of backtracking steps has been reached. We use the recommended values c_1 = 0.0001 and c_2 = 0.9 from <ref type="bibr">[33]</ref>. Due to the inexact nature of u^&#948;, we adopt a stopping rule from Bakushinsky and Smirnova <ref type="bibr">[32]</ref> to terminate the iterations at the first index K = K(&#948;) such that ||F(&#947;_K) - u^&#948;|| &#8804; &#961;&#948; &lt; ||F(&#947;_k) - u^&#948;|| for 0 &#8804; k &lt; K.</p></div>
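The IRGN update can be exercised on a toy forward map. This is only an illustration of the update formula under simplifying assumptions (W = I, unit step size, no line search, a hypothetical componentwise model), not the paper's EIT implementation:

```python
import numpy as np

def irgn(F, J, u_delta, gamma0, gamma_star, lam1=1.0, c=4.0, n_iter=30):
    """IRGN sketch:
    gamma_{k+1} = gamma_k - (J^T J + lam_k I)^{-1}
                  (J^T (F(gamma_k) - u_delta) + lam_k (gamma_k - gamma_star)),
    with the decreasing regularization sequence lam_k = lam1 * c / (c + k - 1).
    """
    gamma = gamma0.copy()
    for k in range(1, n_iter + 1):
        lam = lam1 * c / (c + k - 1)
        Jk = J(gamma)
        rhs = Jk.T @ (F(gamma) - u_delta) + lam * (gamma - gamma_star)
        H = Jk.T @ Jk + lam * np.eye(len(gamma))
        gamma = gamma - np.linalg.solve(H, rhs)
    return gamma

# Toy forward map F(g) = g**2 (componentwise); the exact solution is sqrt(u).
F = lambda g: g ** 2
J = lambda g: np.diag(2 * g)                 # Jacobian of the toy map
u_delta = np.array([4.0, 9.0])
gamma = irgn(F, J, u_delta, gamma0=np.array([1.0, 1.0]),
             gamma_star=np.array([1.0, 1.0]))
```

The iterate approaches (2, 3) up to the small bias left by the final regularization parameter, which is the qualitative behavior the decreasing sequence &#955;_k is designed to produce.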
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.1.">Derivative of the forward operator</head><p>Computing the Jacobian of the forward operator is required for most iterative Newton methods. If the regularity assumptions on the domain and the coefficients are satisfied, the forward operator is differentiable. The operator F is Fr&#233;chet differentiable: it maps &#947; &#8712; int(Q) to the solution (u, U) &#8712; H of the forward problem with current vector I.</p><p>Using <ref type="bibr">(23)</ref>, the Jacobian has been derived in <ref type="bibr">[30]</ref>.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.">The statistical inverse problem</head><p>Inverse problems are typically written in terms of a minimization problem. In this section, we instead write the inverse problem in terms of the posterior density of the conductivity &#947; given the measurements U on &#8706;&#937;, and use it to obtain the expected value of the conductivity given the surface measurements. This expectation is a reasonable point estimate for the ill-posed inverse problem. In the following, we formulate the finite dimensional posterior density for the conductivity distribution given the finite dimensional measurements on &#8706;&#937;. Let &#948;_L represent the L-dimensional measurement noise corresponding to the measurements U_L. Denote &#947;*_M, U*_L and &#948;*_L as the random variables for &#947;_M, U_L and &#948;_L, respectively.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.1.">The posterior density</head><p>For the posterior density &#960;_{&#947;*_M}(&#947;_M | U_L, &#948;_L), we first assume the discrete EIT forward map taking (&#947;_M, &#948;_L) to U_L is well posed. By integrating the measurement noise &#948;_L out of the joint density of the variables, we obtain the joint density of &#947;*_M and U*_L. We assume that the measurement noise is independent of &#947;*_M and that it is additive, that is, U_L = &#923;(&#947;_M) + &#948;_L, where &#923; denotes the discrete forward operator. The posterior density of &#947;*_M given the measurements U_L then follows from the conditional probability formula, &#960;_{&#947;*_M}(&#947;_M | U_L) &#8733; &#960;(U_L | &#947;_M) &#960;_{&#947;*_M}(&#947;_M).</p><p>The posterior density can be defined by choosing a density for the measurement noise &#948;*_L and a prior density function for &#947;*_M. We assume data with additive Gaussian noise, so that &#960;(U_L | &#947;_M) &#8733; exp(-(U_L - &#923;(&#947;_M))^T C^{-1} (U_L - &#923;(&#947;_M)) / 2), where C is a positive definite covariance matrix. The density of &#947;*_M is the prior density because it contains prior knowledge about &#947;*_M. We also assume &#960;_{&#947;*_M}(&#947;_M) &#8733; exp(-&#945; R(&#947;_M)) &#967;_A(&#947;_M), where R(&#8226;) is a regularization function, &#945; &gt; 0 is a constant, and &#967;_A(&#947;_M) is an indicator function with A = [0, &#8734;)^n.</p></div>
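The resulting unnormalized log posterior (Gaussian log-likelihood plus the prior term and the nonnegativity indicator) can be sketched directly; the linear toy model below is a hypothetical stand-in for the EIT forward operator, used only to exercise the formula:

```python
import numpy as np

def log_posterior(gamma, U, forward, C_inv, alpha, R):
    """Unnormalized log posterior:
    -0.5 (U - forward(gamma))^T C^{-1} (U - forward(gamma)) - alpha R(gamma),
    with the indicator chi_A enforcing gamma >= 0 componentwise."""
    if np.any(gamma < 0):
        return -np.inf                 # outside A = [0, inf)^n
    r = U - forward(gamma)
    return -0.5 * r @ C_inv @ r - alpha * R(gamma)

A = np.array([[1.0, 0.0], [0.0, 2.0]])  # hypothetical linear forward map
lp = log_posterior(np.array([1.0, 1.0]),
                   U=np.array([1.0, 2.0]),
                   forward=lambda g: A @ g,
                   C_inv=np.eye(2),
                   alpha=0.1,
                   R=lambda g: np.sum(np.abs(g)))
```

This is exactly the quantity an MCMC sampler needs: only ratios of posterior values matter, so the unknown normalization constant never has to be computed.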
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.2.">Prior density as a regularization function</head><p>In this section, we discuss several choices for the regularization function R(&#8226;). We can apply most regularization functions in both the statistical and the analytical settings. However, the statistical setting typically allows one to choose from more general regularization functions than its analytical counterpart.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.2.1.">The &#8467;_p prior</head><p>The &#8467;_p regularization R_{&#8467;p}(&#947;_M) is defined as R_{&#8467;p}(&#947;_M) = &#8721;_i c_i |&#947;_{M,i} - &#947;^b_{M,i}|^p, where the c_i represent the weights, &#947;^b_M is a typical background conductivity and 0 &lt; p &#8804; 2 is a constant <ref type="bibr">[19]</ref>. R_{&#8467;p}(&#947;_M) is a norm if p &#8805; 1 and only defines a metric when 0 &lt; p &lt; 1. The regularization function is represented by a norm for most analytical methods, but the statistical reconstruction can also handle cases where 0 &lt; p &lt; 1. In theory, a good choice for the weights c_i is large values at the boundary and exponentially decreasing values towards the center of &#937;, because the variance is smaller on the boundary than in the center <ref type="bibr">[34]</ref>. The &#8467;_p regularization enforces sparsity when 0 &lt; p &#8804; 1 and enforces smoothness when p = 2.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.2.2.">The total variation prior</head><p>Total variation regularization is used to obtain smooth images, which is needed for most practical applications. The total variation regularization for an infinite dimensional space is R_{TV}(&#947;) as previously defined, with &#947; the infinite dimensional version of &#947;_M. The discrete analogue of the total variation regularization for a two-dimensional body, R_{TVc} <ref type="bibr">[19]</ref>, is R_{TVc}(&#947;_M) = &#8721;_{i=1}^{h} l_i |&#947;_{M,a(1,i)} - &#947;_{M,a(2,i)}|,</p><p>where l_i is the length of the edge shared by the i-th pair of adjacent triangles, so each term is the jump of &#947;_M across the difference vector (0, 0, ..., 0, 1_{a(1,i)}, 0, ..., 0, -1_{a(2,i)}, 0, ..., 0), <ref type="bibr">(38)</ref> and a = {a(j,i)}, i = 1, ..., h, j &#8712; {1, 2}, is the set of index tuples (a(1,i), a(2,i)) for all adjacent triangle pairs.</p></div>
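Both prior terms are cheap to evaluate on a discrete conductivity vector. A small sketch (helper names are illustrative; the TV version below uses a one-dimensional chain of neighbors in place of the triangle adjacency):

```python
import numpy as np

def R_lp(gamma, gamma_b, p):
    """l_p prior term: sum_i |gamma_i - gamma_b_i|^p with unit weights c_i."""
    return float(np.sum(np.abs(gamma - gamma_b) ** p))

def R_tv(gamma, edge_lengths=None):
    """Discrete total variation: edge-length-weighted sum of jumps between
    neighboring cells (here a 1D chain stands in for adjacent triangles)."""
    jumps = np.abs(np.diff(gamma))
    if edge_lengths is None:
        edge_lengths = np.ones_like(jumps)
    return float(np.sum(edge_lengths * jumps))

g   = np.array([1.0, 1.0, 3.0, 1.0])   # one "inclusion" above background
g_b = np.ones(4)
r1 = R_lp(g, g_b, 1.0)   # sparsity-promoting: 2.0
r2 = R_lp(g, g_b, 2.0)   # smoothness-promoting: 4.0
tv = R_tv(g)             # |1-1| + |3-1| + |1-3| = 4.0
```

Note how the p = 1 term charges the inclusion once by its amplitude, while TV charges every jump across an edge, which is why combining them balances sparsity against piecewise smoothness.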
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.3.">Combination of the &#8467;_p and total variation priors</head><p>In general, we can choose &#960;_{&#947;*_M}(&#947;_M) to be any kind of prior density on &#937; instead of using the classical analytical choices. We recommend choosing a continuous regularization function for &#947;_M that is integrable over &#937; for proper reconstructions.</p><p>The &#8467;_p and total variation priors can provide better inversion results when combined <ref type="bibr">[26]</ref>, R(&#947;_M) = &#945;_1 R_{TVc}(&#947;_M) + &#945;_2 R_{&#8467;p}(&#947;_M),</p><p>where &#945;_1, &#945;_2 &gt; 0 are constants. For a proper choice of &#945;_1, &#945;_2 &gt; 0, this regularization function can provide smooth images that recover the correct background conductivity.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5.">The Markov Chain Monte Carlo method</head><p>We have discussed the posterior density with several meaningful prior densities or regularization functions. Now, we can find the Bayesian estimate for &#947;*_M based on the measurements U_L, i.e., the conditional mean of the posterior density. There is no direct method for finding the Bayesian estimate, because neither the posterior density nor the conditional mean has a closed form. However, we can use the Markov Chain Monte Carlo (MCMC) method to generate a large set of random samples distributed according to the posterior density, following Gelman et al. <ref type="bibr">[37]</ref>. For a normal target and proposal distribution, the optimal acceptance ratio is approximately 0.45 in the one-dimensional case and 0.234 as the number of dimensions tends to infinity <ref type="bibr">[37]</ref>. Suppose we perform t_0 adaptions, one every t_r iterations, where 1 &lt; t_r t_0 &lt; B* &lt; N. Let c_i denote a variable recording whether the i-th iteration of this algorithm has been accepted: c_i := 1 if the i-th iteration has been accepted, and 0 otherwise.</p><p>The acceptance ratio of the k-th proposal distribution is estimated by the fraction of accepted moves among the t_r iterations since the previous adaption. Informally speaking, the algorithm modifies the covariance matrix during the pilot time t_r t_0 to converge closely to the optimal acceptance ratio. Then, the standard Metropolis-Hastings algorithm begins with the latest state and proposal distribution of the pilot time.</p><p>In Algorithm 1, the pilot adaptive Metropolis algorithm is recapitulated with an arbitrary starting state x^(0) &#8712; E and a starting guess for the positive definite covariance matrix C_0. The chain only satisfies the Markov property after the last adaption at time t_r t_0 in Algorithm 1. The chain usually still moves towards the high probability areas of the target distribution during the pilot time <ref type="bibr">[25]</ref><ref type="bibr">[26]</ref><ref type="bibr">[27]</ref><ref type="bibr">[35]</ref>.</p></div>
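A minimal sketch of the pilot adaptive idea follows. For brevity it adapts only a scalar proposal scale (rather than the full covariance matrix C_k of Algorithm 1) toward the target acceptance ratio during the pilot time, then continues as a plain Metropolis chain; all names and the particular adaptation rule are illustrative assumptions:

```python
import numpy as np

def pilot_adaptive_metropolis(log_target, x0, n_samples,
                              t_r=100, t0=20, target_acc=0.234, seed=0):
    """Pilot adaptive Metropolis sketch: adapt the proposal every t_r
    iterations during the pilot time t_r * t0, then run plain Metropolis."""
    rng = np.random.default_rng(seed)
    d = len(x0)
    scale = 1.0
    x = np.asarray(x0, float)
    lp = log_target(x)
    samples, accepts = [], []
    for i in range(n_samples):
        prop = x + scale * rng.standard_normal(d)   # Gaussian random walk
        lp_prop = log_target(prop)
        accept = np.log(rng.random()) < lp_prop - lp
        if accept:
            x, lp = prop, lp_prop
        accepts.append(accept)
        samples.append(x.copy())
        # Pilot phase: estimate the acceptance ratio over the last t_r
        # iterations and nudge the proposal scale toward the optimum.
        if i < t_r * t0 and (i + 1) % t_r == 0:
            acc_hat = np.mean(accepts[-t_r:])
            scale *= np.exp(acc_hat - target_acc)
    return np.array(samples), float(np.mean(accepts))

# Sample a standard 2-D Gaussian target, starting far from the mode.
log_std_normal = lambda x: -0.5 * np.sum(np.asarray(x) ** 2)
samples, acc_rate = pilot_adaptive_metropolis(log_std_normal, [3.0, -3.0], 5000)
```

As in the text, only the post-pilot portion of the chain is a genuine Markov chain; the pilot samples mainly carry the chain into the high probability region.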
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="6.">Numerical experiments</head><p>We generate synthetic data for our simulations with a known conductivity distribution &#947;. We proceed to solve the forward problem using &#947; in the finite element Galerkin method. For each case, we had two different discrete conductivity distributions inside a circular domain &#937;. We used |T| = 4128 triangles and N = 2129 linear basis functions for u and &#947; on &#937;. There are L = 16 equally spaced electrodes placed along the boundary, with each electrode covering 5 mm of the boundary. Next, we added random Gaussian noise with a standard deviation that is 1% and 3% of the maximum measurement, respectively. In order to avoid the inverse crime, we reconstructed &#947; on a coarser mesh consisting of 1032 triangles with 549 mesh points.</p><p>Our simulation specifications are listed in the following table, which includes the known &#947; with a residual &#948; in <ref type="bibr">(22)</ref> and a stopping rule for choosing &#961; in <ref type="bibr">(12)</ref>.</p><p>The sequence of step lengths s_k is chosen through a backtracking strategy, s_1, s_1/2, ..., until either of the strong Wolfe conditions from <ref type="bibr">(20)</ref> or (<ref type="formula">21</ref>) is satisfied. The maximum number of backtracking steps was set to 16. We also imposed a lower bound of 10^-3 on &#947; to prevent singularity in the numerical computation.</p><p>No backtracking is possible if the sequence of regularization parameters &#955;_k decreases too quickly. We define &#955;_k = &#955;_1 c/(c + k - 1), with &#955;_1 = 1 and c = 4, which provides us with d&#770; = 1.25.</p></div>
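The value d&#770; = 1.25 for this parameter sequence can be checked directly, since the ratio &#955;_k/&#955;_{k+1} = (c + k)/(c + k - 1) is largest at k = 1:

```python
# lam_k = lam_1 * c / (c + k - 1) with lam_1 = 1, c = 4.
# The consecutive ratio lam_k / lam_{k+1} = (c + k) / (c + k - 1)
# decreases in k, so its supremum is attained at k = 1: (c + 1) / c = 1.25.
lam = lambda k, lam1=1.0, c=4.0: lam1 * c / (c + k - 1)
ratios = [lam(k) / lam(k + 1) for k in range(1, 50)]
d_hat = max(ratios)
```
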
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Example 1. Single inclusion</head><p>The true conductivity consists of a homogeneous background and one circular inclusion of radius 0.01 mm centered at (0.0242 mm, 0.015 mm). The conductivities of the background and the inclusion are 7 &#8226; 10^-4 Ohm and 10^-8 Ohm, respectively, as shown in Fig. <ref type="figure">1</ref>. The reconstructions with 1% and 3% noise levels are shown in Fig. <ref type="figure">2</ref>(a) and 2(c) for the IRGN method and Fig. <ref type="figure">2(b</ref>) and 2(d) for the statistical inversion method.</p><p>Inversion results from the IRGN method are smooth because of the Tikhonov regularization. The inclusion location is effectively captured, but its support is slightly larger than the true inclusion. In particular, it is significantly extended towards the center of &#937; when the noise level is higher at 3%. The magnitude of the inclusion conductivity is overestimated at about 10^-4 Ohm compared to the true value of 10^-8 Ohm. In contrast, the reconstruction by the statistical inversion method is more localized at the true location with a reasonably homogeneous background. However, with a higher noise level, the background conductivity starts to get distorted. We computed the &#8467;_1 and &#8467;_2 reconstruction errors as e_p = ||&#947; - &#947;_T||_p / ||&#947;_T||_p, p = 1, 2, where &#947;_T is the true conductivity distribution.</p></div>
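Assuming the usual relative-error definition e_p = ||&#947; - &#947;_T||_p / ||&#947;_T||_p, the two error measures can be computed as follows (the small vectors are illustrative stand-ins for the reconstructed and true conductivity fields):

```python
import numpy as np

def rel_error(gamma, gamma_true, p):
    """Relative l_p reconstruction error e_p = ||gamma - gamma_T||_p / ||gamma_T||_p."""
    num = np.sum(np.abs(gamma - gamma_true) ** p) ** (1.0 / p)
    den = np.sum(np.abs(gamma_true) ** p) ** (1.0 / p)
    return float(num / den)

gamma_T = np.array([1.0, 1.0, 1.0, 1.0])   # "true" field
gamma_r = np.array([1.0, 1.0, 2.0, 1.0])   # "reconstruction" with one bad cell
e1 = rel_error(gamma_r, gamma_T, 1)        # 1/4 = 0.25
e2 = rel_error(gamma_r, gamma_T, 2)        # 1/2 = 0.5
```
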
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Example 2. Double inclusion</head><p>The true conductivity consists of a homogeneous background and two circular inclusions of radius 0.01 mm centered at (&#177;0.036 mm, 0 mm). The conductivities of the background and the inclusions are 7 &#8226; 10^-4 Ohm and 10^-8 Ohm, respectively, as shown in Fig. <ref type="figure">3</ref>. The reconstructions with 1% and 3% noise levels are shown in Fig. <ref type="figure">4</ref>(a) and 4(c) for the IRGN method and Fig. <ref type="figure">4</ref>(b) and 4(d) for the statistical inversion method. We observe that both methods are able to retrieve the inclusions. In the IRGN method, the supports of the inclusions are extended towards the center of &#937;, and the magnitude of the background conductivity is slightly overestimated. However, the statistical approach provides a more localized solution with a sharper background. We list the reconstruction errors for &#947; in Table <ref type="table">3</ref>, which shows that the errors e_1 and e_2 are smaller for the statistical inversion method.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Example 3. Quadruple inclusion</head><p>The true conductivity consists of a homogeneous background and four circular inclusions of radius 0.01 mm centered at (&#177;0.036 mm, 0 mm) and (0 mm, &#177;0.036 mm). The conductivities of the background and the inclusions are 7 &#8226; 10^-4 Ohm and 10^-8 Ohm, respectively, as shown in Fig. <ref type="figure">5</ref>. Multiple inclusions are challenging for most numerical algorithms. However, both approaches used in this paper produce reasonable reconstructions of &#947; for different noise levels. As expected, with a higher noise level and more inclusions, the supports of the inclusions become larger in the IRGN method, eventually making them barely distinguishable. On the other hand, the statistical inversion produces significantly sharper images with more localized inclusions, even at the higher noise level.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="7.">Conclusions</head><p>We constructed the EIT forward model using the CEM with a Galerkin finite element approximation and then discussed the setup for the linearized inverse problem. We discussed several regularization functions for the statistical inverse problem and compared these general regularization functions to the regularization function used in the deterministic setting. We obtained a Bayes estimate for the electrical conductivity from the Pilot Adaptive Metropolis algorithm, which implements a smoothing criterion and enforces sparsity in the prior distribution. We used both the &#8467;_p and total variation priors for regularization in the Pilot Adaptive Metropolis algorithm. We found that coupling sparsity regularization and smoothness regularization improved performance more than using only sparsity constraints or only smoothness constraints.</p><p>The smoothness regularization provided regularity in the solution, while the sparsity approach reduced the ill-posed nature of the inverse problem. Together, both criteria resulted in electrical impedance images with higher resolution. We determined that both sparsity and smoothness regularization are needed during EIT inversion for improved image reconstruction. For this study, the statistical algorithm provided better reconstructions than IRGN in terms of both the &#8467;_2 and &#8467;_1 errors. One drawback of the statistical approach is that it can be computationally expensive to run. For low dimensional cases, the running times were about 40-45 min when we used the deterministic method as an initial guess.</p><p>Additional disadvantages of the statistical algorithm and IRGN method are their dependencies on (1) the location of the inclusions and (2) the data noise. Reconstruction errors grow as the data noise increases and as the inclusions move further inside the object. 
This result is a direct consequence of the ill-posedness of the inverse problem. We mitigated this effect by creating inclusions closer to the surface, where the data resolution is higher, and by using a relatively low noise level. Most experimental noise for EIT does not exceed 10%, but the noise distribution is unknown in practice; our EIT data is synthetic and produced in a controlled environment. We will continue our work to determine whether we can reconstruct high resolution images when the noise distribution is unknown, and we will test our IRGN method and statistical algorithm further with experimental data. Overall, we found that the statistical approach is a promising and computationally efficient algorithm when it uses the IRGN reconstruction as its starting point.</p></div></body>
		</text>
</TEI>
