
Electrical Impedance Tomography: A Fair Comparative Study on Deep Learning and Analytic-based Approaches

Derick Nganyu Tanyu, Jianfeng Ning, Andreas Hauptmann, Bangti Jin, Peter Maass
Abstract

Electrical Impedance Tomography (EIT) is a powerful imaging technique with diverse applications, e.g., medical diagnosis, industrial monitoring, and environmental studies. The EIT inverse problem is about inferring the internal conductivity distribution of an object from measurements taken on its boundary. It is severely ill-posed, necessitating advanced computational methods for accurate image reconstructions. Recent years have witnessed significant progress, driven by innovations in analytic-based approaches and deep learning. This review comprehensively explores techniques for solving the EIT inverse problem, focusing on the interplay between contemporary deep learning-based strategies and classical analytic-based methods. Four state-of-the-art deep learning algorithms are rigorously examined, including the deep D-bar method, deep direct sampling method, fully connected U-net, and convolutional neural networks, harnessing the representational capabilities of deep neural networks to reconstruct intricate conductivity distributions. In parallel, two analytic-based methods, i.e., sparsity regularisation and D-bar method, rooted in mathematical formulations and regularisation techniques, are dissected for their strengths and limitations. These methodologies are evaluated through an extensive array of numerical experiments, encompassing diverse scenarios that reflect real-world complexities. A suite of performance metrics is employed to assess the efficacy of these methods. These metrics collectively provide a nuanced understanding of the methods’ ability to capture essential features and delineate complex conductivity patterns.

One novel feature of the study is the incorporation of variable conductivity scenarios, introducing a level of heterogeneity that mimics textured inclusions. This departure from uniform conductivity assumptions mimics realistic scenarios where tissues or materials exhibit spatially varying electrical properties. Exploring how each method responds to such variable conductivity scenarios opens avenues for understanding their robustness and adaptability.

1 Introduction and motivation

This paper investigates deep learning concepts for the continuous model of electrical impedance tomography (EIT). EIT is one of the most intensively studied inverse problems, and there already exists a very rich body of literature on various aspects [14, 104]. EIT as an imaging modality is of considerable practical interest in noninvasive imaging and non-destructive testing. For example, the reconstruction can be used for diagnostic purposes in medical applications, e.g. monitoring of lung function, detection of cancer in the skin and breast and location of epileptic foci [51]. Similarly, in geophysics, one uses electrodes on the surface of the earth or in boreholes to locate resistivity anomalies, e.g. minerals or contaminated sites, and it is known as geophysical resistivity tomography in the literature.

Since its first formulation by Calderón [20], the issue of image reconstruction has received enormous attention, and many reconstruction algorithms have been proposed based on regularised reconstructions, e.g., Sobolev smoothness, total variation and sparsity. Due to the severely ill-posed nature of the inverse problem and the high degree of non-linearity of the forward model, the resolution of the obtained reconstructions has been modest at best. Nonetheless, recent years have witnessed significant improvements in EIT reconstruction regarding resolution and speed. This impressive progress was primarily driven by recent innovations in deep learning, especially deep neural network architectures, high-quality paired training data, efficient training algorithms (e.g., Adam), and powerful computing facilities, e.g., graphical processing units (GPUs).

This study aims to comprehensively and fairly compare deep learning techniques for solving the EIT inverse problem. This study has several sources of motivation. First, the classical, analytical setting of EIT is severely ill-posed, to such an extent that it allows only rather sketchy reconstructions when employing classical regularisation schemes. Unless one utilises additional a priori information, there is no way around the ill-posedness. This has motivated the application of learning concepts in this context. Incorporating additional information in the form of typical data sets and ground truth reconstructions allows constructing an approximation of a data manifold specific to the task at hand. The structures that distinguish these manifolds are typically hard to capture by explicit physical-mathematical models. To some extent, TV- or sparsity-based Tikhonov functionals exploit these features. However, learning the prior distribution from sufficiently large sets of training data potentially offers much greater flexibility than these hand-crafted priors. Second, there already exists a growing and rich body of literature on learned concepts for EIT; see, e.g., the recent survey [65] and Section 3 for a detailed description of the state of the art. Nevertheless, most of these works focus on their own approaches, typically showing their superiority compared to somewhat standard and basic analytical methods. In contrast, we aim at a fair and more comprehensive comparison of different learned concepts and include a comparison with two advanced analytical methods (i.e., D-bar and sparsity methods).

It is worth mentioning that inverse problems pose a particular challenge for learned concepts due to their inherent instability. For example, directly adapting well-established network architectures, which have been successfully applied to computer vision or imaging problems, typically fails for inverse problems, e.g., medical image reconstruction tasks. Hence, such learned concepts for inverse parameter identification problems are of particular interest, both for developing an underlying theory and for assessing the performance in practical applications. Indeed, the research on learned concepts for inverse problems has exploded over the past years; see, e.g., the review [5] and the references cited therein for a recent overview of the state of the art. Arguably, the two most prominent fields of application for inverse problems are PDE-based parameter identification problems and tasks in tomographic image reconstruction. These fields actually overlap, e.g. when it comes to parameter identification problems in PDE-based multi-physics models for imaging. The most common examples in tomography are X-ray tomography (linear) and EIT (non-linear). Hence, one may also regard this study as being prototypical of how deep learning concepts should be evaluated in the context of non-linear PDE inverse problems.

The rest of the paper is organised as follows. In Section 2, we describe the continuum model for EIT, and also two prominent analytic-based approaches for EIT reconstruction, i.e., sparsity and D-bar method. Then, in Section 3, we describe four representative deep learning-based approaches for EIT imaging. Finally, in Section 4, we present an extensive array of experiments with a suite of performance metrics to shed insights into the relative merits of the methods. We conclude with further discussions in Section 5.

2 Electrical impedance tomography

Mathematically speaking, the continuous EIT problem aims at determining a spatially-varying electrical conductivity σ\sigma within a bounded domain Ω\Omega by using measurements of the electrical potential on the boundary Ω\partial\Omega. The basic mathematical model for the forward problem is the following elliptic PDE:

div(σu)=0, in Ω,-\mbox{div}(\sigma\nabla u)=0,\quad\mbox{ in }\Omega, (1)

subject to a Neumann boundary condition σun=j\sigma\frac{\partial u}{\partial n}=j on Ω\partial\Omega, which satisfies a compatibility condition ΩjdS=0\int_{\partial\Omega}j{\rm d}S=0. An EIT experiment consists of applying an electrical current jj on the boundary Ω\partial\Omega and measuring the resulting electrical potential ϕ=u|Ω\phi=u|_{\partial\Omega} on Ω\partial\Omega. The Neumann to Dirichlet (NtD) operator Λσ,N:jϕ\Lambda_{\sigma,N}:j\mapsto\phi maps a Neumann boundary condition jj to the Dirichlet data ϕ=u|Ω\phi=u|_{\partial\Omega} on Ω\partial\Omega.

In practice, several input currents are injected, and the induced electrical potentials are measured; see [27, 57] for discussions on the choice of optimal input currents. This data contains information about the underlying NtD map Λσ,N\Lambda_{\sigma,N}. The inverse problem is to determine or at least to approximate the true unknown physical electrical conductivity σ\sigma^{\dagger} from a partial knowledge of the map. This inverse problem was first formulated by Calderón [20], who also gave a uniqueness result for the linearised problem. The mathematical theory of uniqueness of the inverse problem with the full NtD map Λσ,N\Lambda_{\sigma,N} has received enormous attention, and many profound theoretical results have been obtained. For an in-depth overview of uniqueness results, we refer to the monograph [55] and survey [104].

2.1 Theoretical background

This section introduces the mathematical model of the EIT problem and the discrepancy functional used for reconstructing the conductivity σ\sigma. Let Ω\Omega be an open-bounded domain in d(d2)\mathbb{R}^{d}\ (d\geq 2) with a Lipschitz boundary Ω\partial\Omega, and let Λσ,N\Lambda_{\sigma,N} denote the NtD map of problem (1). We employ the usual Sobolev space for the Neumann boundary data σun=jH~12(Ω)\sigma\frac{\partial u}{\partial n}=j\in\tilde{H}^{-\frac{1}{2}}(\partial\Omega), respectively Dirichlet boundary condition u=ϕH~12(Ω)u=\phi\in\tilde{H}^{\frac{1}{2}}(\partial\Omega) on Ω\partial\Omega. Throughout, we make use of the space H~1(Ω)\tilde{H}^{1}(\Omega), which is a subspace of the Sobolev space H1(Ω)H^{1}(\Omega) with vanishing mean on Ω\partial\Omega, i.e., H~1(Ω)={vH1(Ω):Ωvds=0}\tilde{H}^{1}(\Omega)=\{v\in H^{1}(\Omega):\int_{\partial\Omega}v{\rm d}s=0\}. The spaces H~12(Ω)\tilde{H}^{\frac{1}{2}}(\partial\Omega) and H~12(Ω)\tilde{H}^{-\frac{1}{2}}(\partial\Omega) are defined similarly. These spaces are equipped with the usual norms. We normalise the solution of the Neumann problem by enforcing Ωuds=0\int_{\partial\Omega}u{\rm d}s=0, so that there exists a unique solution uH~1(Ω)u\in\tilde{H}^{1}(\Omega). We denote the Dirichlet-to-Neumann (DtN) map by Λσ,D\Lambda_{\sigma,D}. Then we have Λσ,N=Λσ,D1\Lambda_{\sigma,N}=\Lambda_{\sigma,D}^{-1}, i.e., DtN and NtD maps are inverse to each other. In usual regularised reconstruction, we employ the NtD map Λσ,N\Lambda_{\sigma,N}, whereas in the D-bar method, we employ the DtN map Λσ,D\Lambda_{\sigma,D}.

An EIT experiment consists of applying a current jj and measuring the resulting potential ϕ\phi on Ω\partial\Omega, and it is equivalent to solving a Neumann forward problem with the physical conductivity σ\sigma^{\dagger}, i.e. ϕ=Λσ,Nj\phi=\Lambda_{\sigma,N}j, on Ω\partial\Omega. In practice, the boundary potential measurements are collected experimentally, and thus ϕ\phi is only an element of the space L2(Ω)L^{2}(\partial\Omega); see, e.g., [22]. Note that the continuum model is mostly academic. A more realistic model is the so-called complete electrode model (CEM) for EIT [100, 53], which models contact impedances and localised electrode geometries. The CEM is finite-dimensional by construction, leading to different mathematical challenges and reconstruction methods.

The solvability, uniqueness and smoothness of the continuum model with respect to LpL^{p} norms can be derived using Meyers’ gradient estimate [80], as in [92].

Theorem 2.1.

Let Ω\Omega be a bounded Lipschitz domain in d(d2)\mathbb{R}^{d}\ (d\geq 2). Assume that σL(Ω)\sigma\in L^{\infty}(\Omega) satisfies λ<σ<λ1\lambda<\sigma<\lambda^{-1} for some fixed λ(0,1)\lambda\in(0,1). For f(Lq(Ω))df\in(L^{q}(\Omega))^{d} and hLq(Ω)h\in L^{q}(\Omega), let uH1(Ω)u\in H^{1}(\Omega) be a weak solution of

div(σu)=div(f)+hinΩ.-\mathrm{div}(\sigma\nabla u)=-\mathrm{div}(f)+h\quad\mathrm{in}\ \Omega.

Then, there exists a constant Q(2,+)Q\in(2,+\infty) depending on λ\lambda and dd only, Q2Q\rightarrow 2 as λ0\lambda\rightarrow 0 and QQ\rightarrow\infty as λ1\lambda\rightarrow 1, such that for any 2<q<Q2<q<Q, we obtain uWloc1,q(Ω)u\in W_{\mathrm{loc}}^{1,q}(\Omega) and for any Ω1Ω\Omega_{1}\subset\subset\Omega

uW1,q(Ω1)C(uH1(Ω)+fLq(Ω)+hLq(Ω)),\|u\|_{W^{1,q}(\Omega_{1})}\leq C(\|u\|_{H^{1}(\Omega)}+\|f\|_{L^{q}(\Omega)}+\|h\|_{L^{q}(\Omega)}),

where the constant CC depends on λ\lambda, dd, qq, Ω1\Omega_{1} and Ω\Omega.

In Theorem 2.1, the boundary condition for the problem can be general. Its effect enters the W1,qW^{1,q}-estimate through the term uH1(Ω)\|u\|_{H^{1}(\Omega)}. In addition, no regularity has been assumed on σ\sigma. Generally, a precise estimate of the constant Q(λ,d)Q(\lambda,d) is missing, but in the 2D case, a fairly sharp estimate of Q(λ,d)Q(\lambda,d) was derived in [6].

2.2 Conventional EIT reconstruction algorithms

EIT suffers from a high degree of non-linearity and severe ill-posedness, as typical of many PDE inverse problems with boundary data. However, its potential applications have sparked much interest in designing effective numerical techniques for its efficient solution. Numerous numerical methods have been proposed in the literature; see [14, Section 7] for an overview (up to 2002). These methods can roughly be divided into two groups: regularised reconstruction and direct methods. Below, we give a brief categorisation of conventional reconstruction schemes.

The methods in the first group are of variational type, i.e., based on minimising a certain discrepancy functional. Commonly the discrepancy JJ is the standard least-squares fitting, i.e., the squared L2(Ω)L^{2}(\partial\Omega) norm of the difference between the electrical potential due to the applied current jj and the measured potential ϕ\phi:

J(σ)=12Λσ,NjϕδL2(Ω)2,J(\sigma)=\tfrac{1}{2}\|\Lambda_{\sigma,N}j-\phi^{\delta}\|_{L_{2}(\partial\Omega)}^{2},

for one single measurement (j,ϕδ)(j,\phi^{\delta}). One early approach of this type is given in [28], which applies one step of a Newton method with a constant conductivity as the initial guess. Due to the severe ill-posedness of the problem, regularisation is beneficial for obtaining reconstructions with improved resolution [35, 94, 56]. Commonly used penalties include Sobolev smoothness [78, 58] for a smooth conductivity distribution, total variation [50], the Mumford-Shah functional [92] and the level set method [30] for recovering piecewise constant conductivities, and sparsity [40, 60, 59] for recovering small inclusions (relative to the background). The combined functional is given by

Ψ(σ)=J(σ)+αR(σ),\Psi(\sigma)=J(\sigma)+\alpha R(\sigma),

where R(σ)R(\sigma) denotes the penalty, and α>0\alpha>0 is the penalty weight. The functional Ψ(σ)\Psi(\sigma) is then minimised over the admissible set

𝒜={σL(Ω):λσλ1 a.e. Ω},\mathcal{A}=\{\sigma\in L^{\infty}(\Omega):\lambda\leq\sigma\leq\lambda^{-1}\mbox{ a.e. }\Omega\},

for some λ(0,1)\lambda\in(0,1). The set 𝒜\mathcal{A} is usually equipped with an Lp(Ω)L^{p}(\Omega) norm (1p)(1\leq p\leq\infty). One may also employ data fitting other than the standard L2(Ω)L^{2}(\partial\Omega)-norm. The most noteworthy one is the Kohn-Vogelius approach, which lifts the boundary data to the domain Ω\Omega and makes the fitting in Ω\Omega [107, 69, 15]; see also [67] for a variant of the Kohn-Vogelius functional. In practice, the regularized formulations have to be properly discretized, commonly done by means of finite element methods [39, 91, 62, 61], due to the spatially variable conductivity and irregular domain geometry. Newton-type methods have also been applied to EIT [71, 72]. Probabilistic formulations of these deterministic approaches are also possible [64, 38, 34, 12], which can provide uncertainty estimates on the reconstruction.

The methods in the second group are of a more direct nature, aiming at extracting relevant information from the given data directly, without going through the expensive iterative process. Bruhl et al [18, 19] developed the factorisation method for EIT, which provides a criterion for determining whether a point lies inside or outside the set of inclusions by carefully analysing the spectral properties of certain operators. Thus, the inclusions can be reconstructed directly by testing every point in the computational domain. The D-bar method of Siltanen, Mueller and Isaacson [98, 82] is based on Nachman’s uniqueness proof [83] and utilises the complex geometric solutions and nonphysical scattering transform for direct image reconstruction. Chow, Ito and Zou [29] proposed the direct sampling method when there are only very few Cauchy data pairs. The method employs dipole potential as the probing function and constructs an indicator function for imaging the inclusions in EIT, and it is easy to implement and computationally cheap. Other notable methods in the group include monotonicity method [47], enclosure method [54], Calderón’s method [11, 96], and MUSIC [3, 2, 70] among others. Generally, direct methods are faster than those based on variational regularisation, but the reconstructions are often inferior in terms of resolution and can suffer from severe blurring.

These represent the most common model-based inversion techniques for EIT reconstruction. Despite this important progress, the quality of images produced by EIT remains modest when compared with other imaging modalities. In particular, at present, EIT reconstruction algorithms are still unable to extract sufficiently useful information from data to be an established routine procedure in many medical applications. Moreover, the iterative schemes are generally time-consuming, especially for 3D problems. One possible way of improving the quality of information is to develop an increased focus on identifying useful information and fully exploiting a priori knowledge. This idea has been applied many times, and the recent advent of deep learning significantly expanded its horizon from hand-crafted regularisers to more complex and realistic learned schemes. Indeed, recently, deep learning-based approaches have been developed to address these challenges by drawing on knowledge encoded in the dataset or structural preferences of the neural network architecture.

We describe the sparsity approach and D-bar method next, and deep learning approaches in Section 3.

2.3 Sparsity-based method

The sparsity concept is very useful for modelling conductivity distributions with “simple” descriptions away from the known background σ0\sigma_{0}, e.g. when σ\sigma consists of an uninteresting background plus some small inclusions. Let δσ=σσ0\delta\sigma^{\dagger}=\sigma^{\dagger}-\sigma_{0}. A “simple” description means that δσ\delta\sigma has a sparse representation with respect to a certain basis/frame/dictionary {ψk}\{\psi_{k}\}, i.e., there are only a few non-zero expansion coefficients. The 1\ell^{1} norm of δσ\delta\sigma can then be used to promote the sparsity of δσ\delta\sigma [33]:

Ψ(σ)=J(σ)+αδσ1,withδσ1=k|δσ,ψk|.\Psi(\sigma)=J(\sigma)+\alpha\|\delta\sigma\|_{\ell^{1}},\quad\mbox{with}\quad\|\delta\sigma\|_{\ell^{1}}=\sum_{k}|\langle\delta\sigma,\psi_{k}\rangle|. (2)

Under certain regularity conditions on {ψk}\{\psi_{k}\}, the problem of minimising Ψ\Psi over the set 𝒜\mathcal{A} is well-posed [58].

Optimisation problems with the 1\ell^{1} penalty have attracted intensive interest [33, 13, 17, 108]. The challenge lies in the non-smoothness of the 1\ell^{1}-penalty and high-degree nonlinearity of the discrepancy J(σ)J(\sigma). The basic algorithm for updating the increment δσi\delta\sigma_{i} and σi=σ0+δσi\sigma_{i}=\sigma_{0}+\delta\sigma_{i} by minimising Ψ\Psi formally reads

δσi+1=𝒮sα(δσisΛσi,N(Λσi,Njϕδ)),\delta\sigma_{i+1}=\mathcal{S}_{s\alpha}(\delta\sigma_{i}-s\Lambda_{\sigma_{i},N}^{\prime\ast}(\Lambda_{\sigma_{i},N}j-\phi^{\delta})),

where s>0s>0 is the step size, Λσi,N\Lambda_{\sigma_{i},N}^{\prime} denotes the Gâteaux derivative of the NtD map Λσi,N\Lambda_{\sigma_{i},N} in σ\sigma, and 𝒮λ(t)=sign(t)max(|t|λ,0)\mathcal{S}_{\lambda}(t)=\mbox{sign}(t)\max(|t|-\lambda,0) is the soft shrinkage operator. However, a direct application of the algorithm does not yield accurate results. We adopt the procedure in Algorithm 1. The key tasks include computing the gradient JJ^{\prime} (Steps 4-5) and selecting the step size (Step 6).

Input: σ0\sigma_{0} and α\alpha
Result: an approximate minimiser δσ\delta\sigma
1 Set δσ0=0\delta\sigma_{0}=0;
2 for i \leftarrow 1, …, I do
3        Compute σi=σ0+δσi\sigma_{i}=\sigma_{0}+\delta\sigma_{i};
4        Compute the gradient J(σi)J^{\prime}(\sigma_{i});
5        Compute the H01H_{0}^{1}-gradient Js(σi)J^{\prime}_{s}(\sigma_{i});
6        Determine the step size sis_{i};
7        Update inhomogeneity by δσi+1=δσisiJs(σi)\delta\sigma_{i+1}=\delta\sigma_{i}-s_{i}J_{s}^{\prime}(\sigma_{i});
8        Threshold δσi+1\delta\sigma_{i+1} by 𝒮siα(δσi+1)\mathcal{S}_{s_{i}\alpha}(\delta\sigma_{i+1});
9        Check stopping criterion.
10 end for
Algorithm 1 Sparsity reconstruction for EIT.
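For illustration, a minimal Python sketch of the loop in Algorithm 1 is given below. The helpers grad_J_s and step_size are hypothetical callbacks wrapping the FEM forward/adjoint solves and the step-size rule described next, and the soft-shrinkage acts directly on the nodal values of δσ (i.e., the pixel basis is used for {ψk}); this is a sketch, not the exact implementation used in the experiments.

```python
import numpy as np

def soft_shrink(x, lam):
    # Soft-shrinkage operator S_lam(t) = sign(t) * max(|t| - lam, 0)
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def sparsity_reconstruction(sigma0, alpha, grad_J_s, step_size, n_iter=200, tol=1e-8):
    """Sketch of Algorithm 1.

    grad_J_s(sigma)        : hypothetical callback returning the smoothed gradient J'_s(sigma)
                             (one forward solve, one adjoint solve, one Poisson solve)
    step_size(i, delta, g) : hypothetical callback implementing Step 6
    """
    delta = np.zeros_like(sigma0)                          # Step 1: delta sigma_0 = 0
    for i in range(n_iter):
        sigma = sigma0 + delta                             # Step 3
        g = grad_J_s(sigma)                                # Steps 4-5
        s = step_size(i, delta, g)                         # Step 6
        delta_new = soft_shrink(delta - s * g, s * alpha)  # Steps 7-8
        if np.linalg.norm(delta_new - delta) < tol:        # Step 9: stopping criterion
            return delta_new
        delta = delta_new
    return delta
```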

Gradient evaluation Evaluating the gradient J(σ)=u(σ)p(σ)J^{\prime}(\sigma)=-\nabla u(\sigma)\cdot\nabla p(\sigma) involves solving an adjoint problem

-\nabla\cdot(\sigma\nabla p)=0,\quad\mbox{in }\Omega,\quad\mbox{with}\quad\sigma\frac{\partial p}{\partial n}=u(\sigma)-\phi^{\delta}\quad\mbox{on }\partial\Omega.

Indeed, J(σ)J^{\prime}(\sigma) is defined via the duality mapping J(σ)[λ]=J(σ),λL2(Ω)J^{\prime}(\sigma)[\lambda]=\langle J^{\prime}(\sigma),\lambda\rangle_{L_{2}(\Omega)}, and thus J(σ)(L(Ω))J^{\prime}(\sigma)\in(L^{\infty}(\Omega))^{\prime} may not be smooth enough. Instead, we take the H01(Ω)H_{0}^{1}(\Omega) metric for σ\sigma, by defining Js(σ)J^{\prime}_{s}(\sigma) via J(σ)[λ]=Js(σ),λH01(Ω)J^{\prime}(\sigma)[\lambda]=\langle J_{s}^{\prime}(\sigma),\lambda\rangle_{H_{0}^{1}(\Omega)}. Integration by parts yields ΔJs(σ)+Js(σ)=J(σ)-\Delta J_{s}^{\prime}(\sigma)+J_{s}^{\prime}(\sigma)=J^{\prime}(\sigma) in Ω\Omega and Js(σ)=0J_{s}^{\prime}(\sigma)=0 on Ω\partial\Omega. The assumption is that the inclusions are in the interior of Ω\Omega. JsJ^{\prime}_{s} is also known as the Sobolev gradient [84] and is a smoothed version of the L2(Ω)L^{2}(\Omega)-gradient. It metrises the set 𝒜\mathcal{A} by the H01(Ω)H_{0}^{1}(\Omega)-norm, thereby implicitly restricting the admissible conductivity to a smoother subset. Numerically, evaluating the gradient Js(σ)J_{s}^{\prime}(\sigma) involves solving a Poisson problem and can be carried out efficiently. Using JsJ^{\prime}_{s}, we can locally approximate Ψ(σ)=Ψ(σ0+δσ)\Psi(\sigma)=\Psi(\sigma_{0}+\delta\sigma) by

Ψ(σ0+δσ)Ψ(σ0+δσi)δσδσi,Js(σi)H1(Ω)+12siδσδσiH1(Ω)2+αδσ1,\Psi(\sigma_{0}+\delta\sigma)-\Psi(\sigma_{0}+\delta\sigma_{i})\sim\langle\delta\sigma-\delta\sigma_{i},J_{s}^{\prime}(\sigma_{i})\rangle_{H^{1}(\Omega)}+\tfrac{1}{2s_{i}}\|\delta\sigma-\delta\sigma_{i}\|_{H^{1}(\Omega)}^{2}+\alpha\|\delta\sigma\|_{\ell^{1}},

which is equivalent to

12siδσ(δσisiJs(σi))H1(Ω)2+αδσ1.\tfrac{1}{2s_{i}}\|\delta\sigma-(\delta\sigma_{i}-s_{i}J_{s}^{\prime}(\sigma_{i}))\|_{H^{1}(\Omega)}^{2}+\alpha\|\delta\sigma\|_{\ell^{1}}. (3)

Upon identifying δσ\delta\sigma with its expansion coefficients in {ψk}\{\psi_{k}\}, the solution to problem (3) is given by

δσi+1=𝒮siα(δσisiJs(σi)).\delta\sigma_{i+1}=\mathcal{S}_{s_{i}\alpha}(\delta\sigma_{i}-s_{i}J_{s}^{\prime}(\sigma_{i})).

This step zeros out small coefficients, thereby promoting the sparsity of δσ\delta\sigma.
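Both ingredients are cheap to realise. In particular, the smoothing that turns the L²-gradient into the Sobolev gradient J's amounts to one screened-Poisson solve. The following finite-difference sketch on a uniform square grid is only illustrative; the actual computations in this paper are FEM-based, and the grid, spacing and boundary handling below are simplifying assumptions.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def sobolev_gradient(grad_L2, h):
    """Smooth the L^2-gradient by solving (-Laplace + I) g_s = J'(sigma), g_s = 0 on the boundary.

    grad_L2 : (n, n) array with the L^2(Omega)-gradient J'(sigma) on a uniform grid
    h       : grid spacing
    """
    n = grad_L2.shape[0]
    # 1D second-difference matrix with homogeneous Dirichlet boundary conditions
    D = sp.diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(n, n)) / h**2
    L = sp.kron(sp.identity(n), D) + sp.kron(D, sp.identity(n))   # 5-point Laplacian
    A = -L + sp.identity(n * n)                                   # operator (-Laplace + I)
    g_s = spla.spsolve(A.tocsc(), grad_L2.ravel())
    return g_s.reshape(n, n)
```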

Step size selection Usually, gradient-type algorithms suffer from slow convergence, e.g., steepest descent methods. One way to enhance its convergence is due to [10]. The idea is to mimic the Hessian with sIsI over the most recent steps so that sI(δσiδσi1)Js(σi)Js(σi1)sI(\delta\sigma_{i}-\delta\sigma_{i-1})\approx J_{s}^{\prime}(\sigma_{i})-J_{s}^{\prime}(\sigma_{i-1}) holds in a least-squares sense, i.e.,

si=argminss(δσiδσi1)(Js(σi)Js(σi1))H1(Ω)2.s_{i}=\arg\min_{s}\|s(\delta\sigma_{i}-\delta\sigma_{i-1})-(J_{s}^{\prime}(\sigma_{i})-J_{s}^{\prime}(\sigma_{i-1}))\|_{H^{1}(\Omega)}^{2}.

This gives rise to one popular Barzilai-Borwein rule si=δσiδσi1,Js(σi)Js(σi1)H1(Ω)/δσiδσi1,δσiδσi1H1(Ω)s_{i}=\langle\delta\sigma_{i}-\delta\sigma_{i-1},J_{s}^{\prime}(\sigma_{i})-J_{s}^{\prime}(\sigma_{i-1})\rangle_{H^{1}(\Omega)}/\langle\delta\sigma_{i}-\delta\sigma_{i-1},\delta\sigma_{i}-\delta\sigma_{i-1}\rangle_{H^{1}(\Omega)} [10, 32]. In practice, following [108], we choose the step length ss to enforce a weak monotonicity

Ψ(σ0+𝒮sα(δσisJs(σi)))maxiM+1kiΨ(σk)τs2𝒮sα(δσisJs(σi))δσiH1(Ω)2,\Psi(\sigma_{0}+\mathcal{S}_{s\alpha}(\delta\sigma_{i}-sJ_{s}^{\prime}(\sigma_{i})))\leq\max_{i-M+1\leq k\leq i}\Psi(\sigma_{k})-\tau\frac{s}{2}\|\mathcal{S}_{s\alpha}(\delta\sigma_{i}-sJ^{\prime}_{s}(\sigma_{i}))-\delta\sigma_{i}\|_{H^{1}(\Omega)}^{2},

where τ\tau is a small number, and M1M\geq 1 is an integer. One may use the step size by the above rule as the initial guess at each inner iteration and then decrease it geometrically by a factor qq until the weak monotonicity is satisfied. The iteration is stopped when sis_{i} falls below a prespecified tolerance sstops_{\mathrm{stop}} or when the maximum iteration number II is reached.
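A sketch of this step-size rule follows, with the Barzilai-Borwein value as the initial guess and a geometric reduction until the weak monotonicity condition holds; plain Euclidean inner products are used here instead of the H¹(Ω) ones, purely for brevity.

```python
import numpy as np

def soft_shrink(x, lam):
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def bb_step(d_delta, d_grad, s_default=1.0):
    # Barzilai-Borwein rule  s_i = <d_delta, d_grad> / <d_delta, d_delta>
    denom = np.dot(d_delta, d_delta)
    return np.dot(d_delta, d_grad) / denom if denom > 0 else s_default

def backtracked_update(delta, grad, sigma0, alpha, Psi, Psi_history, s_init,
                       q=0.5, tau=1e-4, s_stop=1e-10):
    """Reduce s geometrically until the weak monotonicity condition is satisfied.

    Psi         : hypothetical callable evaluating the regularised functional Psi(sigma)
    Psi_history : objective values of the last M iterates
    """
    s, ref = s_init, max(Psi_history)
    while s > s_stop:
        cand = soft_shrink(delta - s * grad, s * alpha)
        if Psi(sigma0 + cand) <= ref - tau * (s / 2) * np.sum((cand - delta) ** 2):
            return s, cand
        s *= q                                   # geometric reduction of the step size
    return s, delta
```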

The above description follows closely the work [59], where the sparsity algorithm was first developed. There are alternative sparse reconstruction techniques, notably based on total variation [92, 39, 16, 112]. For example, [16] presented an experimental (in-vivo) evaluation of the total variation approach using a linearised model, with the resulting optimisation problem solved by the primal-dual interior point method, and the work [112] compared different optimisers. Due to the non-smoothness of the total variation, one may relax the formulation with the Modica-Mortola function in the sense of Gamma convergence [92, 61].

2.4 The D-bar method

The D-bar method of Siltanen, Mueller and Isaacson [98] is a direct reconstruction algorithm based on the uniqueness proof due to Nachman [83]; see also Novikov [87]. That is, a reconstruction is directly obtained from the DtN map Λσ,D\Lambda_{\sigma,D}, without going through an iterative process. Note that the DtN map Λσ,D\Lambda_{\sigma,D} can be computed as the inverse of the measured NtD map Λσ,N\Lambda_{\sigma,N} when full boundary data is available. Below we briefly overview the classic D-bar algorithm assuming σC2(Ω)\sigma\in C^{2}(\Omega), with a positive lower bound (i.e., σc>0\sigma\geq c>0 in Ω\Omega), and σ1\sigma\equiv 1 in a neighbourhood of the boundary Ω\partial\Omega. In this part, we consider an embedding of 2\mathbb{R}^{2} in the complex plane, and hence we will identify planar points x=(x1,x2)x=(x_{1},x_{2}) with the corresponding complex number x1+ix2x_{1}+{\rm i}x_{2}, and the product kxkx denotes complex multiplication. For more detailed discussions, we refer interested readers to the survey [82].

First, we transform the conductivity equation (1) into a Schrödinger-type equation by substituting u~=σu\widetilde{u}=\sqrt{\sigma}u and setting q=Δσ/σq=\Delta\sqrt{\sigma}/\sqrt{\sigma} and extending σ1\sigma\equiv 1 outside Ω\Omega. Then we obtain

(Δ+q(x))u~(x)=0,in 2.(-\Delta+q(x))\widetilde{u}(x)=0,\quad\mbox{in }\mathbb{R}^{2}. (4)

Next we introduce a class of special solutions of equation (4) due to Faddeev [36], the so-called complex geometrical optics (CGO) solutions ψ(x,k)\psi(x,k), depending on a complex parameter k{0}k\in\mathbb{C}\setminus\{0\} and x2x\in\mathbb{R}^{2}. These exponentially behaving functions are key to the reconstruction. Specifically, given qLp(2), 1<p<2q\in L^{p}(\mathbb{R}^{2}),\,1<p<2, the CGO solutions ψ(x,k)\psi(x,k) are defined as solutions to

(Δ+q(x))ψ(,k)=0,in 2,(-\Delta+q(x))\psi(\cdot,k)=0,\quad\mbox{in }\mathbb{R}^{2},

satisfying the asymptotic condition eikxψ(x,k)1W1,p~(2)e^{-{\rm i}kx}\psi(x,k)-1\in W^{1,\tilde{p}}(\mathbb{R}^{2}) with 2<p~<2<\tilde{p}<\infty. These solutions are unique for k{0}k\in\mathbb{C}\setminus\{0\} as shown in [83, Theorem 1.1]. Then D-bar algorithm recovers the conductivity σ\sigma from the knowledge of the CGO solutions μ(x,k)=eikxψ(x,k)\mu(x,k)=e^{-{\rm i}kx}\psi(x,k) at the limit k0k\to 0 [83, Section 3]

limk0μ(x,k)=σ,xΩ.\lim_{k\to 0}\mu(x,k)=\sqrt{\sigma},\quad x\in\Omega.

Numerically, one can substitute the limit by k=0k=0 and evaluate μ(x,0)\mu(x,0). The reconstruction of σ\sigma relies on the use of an intermediate object called non-physical scattering transform 𝐭\mathbf{t}, defined by

𝐭(k)=2ek(x)μ(x,k)q(x)dx,\mathbf{t}(k)=\int_{\mathbb{R}^{2}}e_{k}(x)\mu(x,k)q(x){\rm d}x,

with ek(x):=exp(i(kx+k¯x¯))e_{k}(x):=\exp({\rm i}(kx+\bar{k}\bar{x})), where over-bar denotes complex conjugate. Since μ\mu is asymptotically close to one, 𝐭(k)\mathbf{t}(k) is similar to the Fourier transform of q(x)q(x). Meanwhile, we can obtain μ\mu by solving the name-giving D-bar equation

¯kμ(x,k)=14πk¯𝐭(k)ek(x)μ(x,k)¯,k0,\bar{\partial}_{k}\mu(x,k)=\frac{1}{4\pi\bar{k}}\mathbf{t}(k)e_{-k}(x)\overline{\mu(x,k)},\quad k\neq 0, (5)

where ¯k=12(k1+ik2)\bar{\partial}_{k}=\frac{1}{2}(\frac{\partial}{\partial k_{1}}+{\rm i}\frac{\partial}{\partial k_{2}}) is known as the D-bar operator. To solve the above equation, the scattering transform 𝐭(k)\mathbf{t}(k) is required, which we cannot measure directly from the experiment, but 𝐭(k)\mathbf{t}(k) can be represented using the DtN map. Indeed, using Alessandrini’s identity [1], we get the boundary integral

𝐭(k)=Ωeik¯x¯(Λσ,DΛ1,D)ψ(x,k)ds.\mathbf{t}(k)=\int_{\partial\Omega}e^{{\rm i}\bar{k}\bar{x}}(\Lambda_{\sigma,D}-\Lambda_{1,D})\psi(x,k){\rm d}s.

Note that Λ1,D\Lambda_{1,D} can be analytically computed, and only Λσ,D\Lambda_{\sigma,D} needs to be obtained from the measurements. Here, we will employ a Born approximation using ψeikx\psi\approx e^{{\rm i}kx}, leading to the linearised approximation

𝐭exp(k)Ωeik¯x¯(Λσ,DΛ1,D)eikxds.\mathbf{t}^{\exp}(k)\approx\int_{\partial\Omega}e^{{\rm i}\bar{k}\bar{x}}(\Lambda_{\sigma,D}-\Lambda_{1,D})e^{{\rm i}kx}{\rm d}s. (6)

This linearised D-bar algorithm can be efficiently implemented. First, one computes the 𝐭exp(k)\mathbf{t}^{\exp}(k) from the measured DtN map Λσ,D\Lambda_{\sigma,D}, and then one solves the D-bar equation (5). Note that the solutions of (5) are independent for each xΩx\in\Omega and one can efficiently parallelise over xx. This leads to real-time implementations and is especially relevant for time-critical applications, e.g., monitoring purposes. The fully nonlinear D-bar algorithm would require first computing ψ\psi by solving a boundary integral equation and then computing the scattering transform 𝐭(k)\mathbf{t}(k).
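To make the linearised algorithm concrete, a small sketch of evaluating 𝐭exp(k) via (6) is given below, assuming the DtN maps are discretised as matrices acting on function values at m equispaced points of the unit circle and a simple trapezoidal quadrature is used.

```python
import numpy as np

def t_exp(k, L_sigma, L_one):
    """Linearised scattering transform (6) on the unit circle.

    k       : complex frequency
    L_sigma : (m, m) matrix approximating the DtN map Lambda_{sigma,D}
    L_one   : (m, m) matrix approximating the reference DtN map Lambda_{1,D}
    Both matrices act on function values at m equispaced boundary points.
    """
    m = L_sigma.shape[0]
    theta = 2 * np.pi * np.arange(m) / m
    x = np.exp(1j * theta)                     # boundary points as complex numbers
    ds = 2 * np.pi / m                         # trapezoidal quadrature weight
    psi_born = np.exp(1j * k * x)              # Born approximation psi ~ e^{ikx}
    diff = (L_sigma - L_one) @ psi_born        # (Lambda_sigma - Lambda_1) e^{ikx}
    return np.sum(np.exp(1j * np.conj(k) * np.conj(x)) * diff) * ds
```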

The above algorithm assumes infinite precision and noise-free data. When the data is noise corrupted with finite measurements, the measured DtN map Λσ,D\Lambda_{\sigma,D} is not accurate, and then the computation of 𝐭(k)\mathbf{t}(k) becomes exponentially unstable for |k|>R|k|>R. Thus, for practical data, we need to restrict the computations to a certain frequency range so as to stably compute 𝐭(k)\mathbf{t}(k). Below we choose R=5R=5 for noise-free data and R=4.5,R=4R=4.5,\,R=4 for 1% and 5% noisy measurements, respectively. This strategy of reducing the cut-off radius for noisy measurements is shown to be a regularisation strategy [68]. The final algorithm can be summarised as outlined below in Algorithm 2.

Input: Λσ,D\Lambda_{\sigma,D} and RR
Result: Regularised reconstruction of σ\sigma
1 Compute analytic Λ1,D\Lambda_{1,D};
2 Evaluate 𝐭exp(k)\mathbf{t}^{\exp}(k) for |k|<R|k|<R by (6);
3 Solve the D-bar equation (5);
4 Obtain σ(x)=μ(x,0)2\sigma(x)=\mu(x,0)^{2} for xΩx\in\Omega;
Algorithm 2 D-bar algorithm using 𝐭exp\mathbf{t}^{\exp}
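For completeness, a deliberately naive sketch of Step 3 is given below: the D-bar equation (5) is rewritten as the integral equation μ = 1 + Cauchy transform of its right-hand side and solved by fixed-point iteration on a truncated uniform k-grid, independently for each x. Production implementations use fast FFT-based (Vainikko-type) solvers; the grid conventions and the crude treatment of the singular point below are simplifying assumptions.

```python
import numpy as np

def solve_dbar_at_x(x, t_vals, kgrid, R, n_iter=20):
    """Fixed-point sketch for the D-bar equation (5) at a single point x (complex x1 + i*x2).

    t_vals : values of t_exp on the k-grid (complex array, same shape as kgrid)
    kgrid  : uniform complex grid covering |k| <= R,
             e.g. k = np.linspace(-R, R, 65); kgrid = k[None, :] + 1j * k[:, None]
    Returns mu(x, .) on the grid; sigma(x) is approximated by mu(x, k ~ 0)**2.
    """
    hk = np.real(kgrid[0, 1] - kgrid[0, 0])              # grid spacing in k
    mask = (np.abs(kgrid) < R) & (np.abs(kgrid) > 1e-8)  # truncation, avoid k = 0
    safe_kbar = np.where(mask, np.conj(kgrid), 1.0)      # avoid division by zero off the mask
    weight = np.where(mask, t_vals / (4 * np.pi**2 * safe_kbar), 0.0)
    weight = weight * np.exp(-1j * (kgrid * x + np.conj(kgrid * x)))   # e_{-k}(x)
    mu = np.ones_like(kgrid)
    for _ in range(n_iter):                              # Neumann-series iteration
        integrand = weight * np.conj(mu)
        new_mu = np.ones_like(mu)
        for idx in np.ndindex(kgrid.shape):
            denom = kgrid[idx] - kgrid                   # Cauchy kernel 1/(k - k')
            denom[idx] = 1.0                             # dummy value at the singular point
            term = integrand / denom
            term[idx] = 0.0                              # ... which is simply skipped
            new_mu[idx] = 1.0 + np.sum(term) * hk**2
        mu = new_mu
    return mu
```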

Besides the D-bar method, there are other analytic and direct reconstruction methods available, e.g., the enclosure method [54], monotonicity method [47], direct sampling method [29], and Calderón’s method [11, 96]. The common advantage of these approaches is their computational efficiency; unfortunately, they also inherit an exponential instability with respect to noise. While there are strategies to deal with noise, e.g., reducing the cut-off radius, the reconstruction quality does suffer: the reconstructions tend to be overall smooth. Additionally, there may be theoretical limitations to the reconstructions that can be obtained. For example, the classic D-bar algorithm requires C2C^{2} conductivities, and the enclosure method can only recover the convex hull of all inclusions. Thus, it is very interesting to discuss how deep learning can help overcome these limitations.

3 Deep learning-based methods

The integration of deep learning techniques has significantly advanced EIT reconstruction. It has successfully addressed several challenges posed by the non-linearity and severe ill-posedness of the inverse problem, leading to improved quality and reconstruction accuracy. Researchers have achieved breakthroughs in noise reduction, edge retention, and spatial resolution, making EIT a more viable imaging modality in medical and industrial applications. This success is mainly attributed to the extraordinary approximation ability of DNNs and the use of a large amount of paired training datasets.

First, much effort has been put into designing DNN architectures for directly learning the maps from the measured voltages UU to conductivity distributions σ\sigma, i.e., training a DNN 𝒢θ\mathcal{G}_{\theta} such that σ𝒢θ(U)\sigma\approx\mathcal{G}_{\theta}(U). Li et al. [74] proposed a four-layer DNN framework constituted of a stacked autoencoder and a logistic regression layer for EIT problems. Tan et al. [102] designed the network based on LeNet convolutional layers and refined it using pooling layers and dropout layers. Chen et al. [26] introduced a novel DNN using a fully connected layer to transform the measurement data to the image domain before a U-Net architecture, and the work [110] proposed a DenseNet with multiscale convolutions. Fan and Ying [37] proposed DNNs with compact architectures for the forward and inverse problems in 2D and 3D, exploiting the low-rank property of the EIT problem. Huang et al. [52] first reconstruct an initial guess using RBF networks, which is then fed into a U-Net for further refinement. The work [95] uses a variational autoencoder to obtain a low-dimensional representation of the images, which is then mapped to a low-dimensional representation of the measured voltages. We refer to [109, 73, 90, 25] for more direct learning methods.

Second, combining traditional analytic-based methods and neural networks is also a popular idea. Abstractly, one employs an analytic operator \mathcal{R} and a neural network 𝒢θ\mathcal{G}_{\theta} such that σ𝒢θ((U))\sigma\approx\mathcal{G}_{\theta}(\mathcal{R}(U)). One example is the Deep D-bar method [45]. It first generates EIT images by the D-bar method, then employs the U-Net network to refine the initial images further. Along this line, one can design the input of the DNN from Calderón’s method [21, 101], domain-current method [106], one-step Gauss-Newton algorithm [79] and conjugate gradient algorithm [111]. Inspired by the mathematical relationship between the index function and the Cauchy difference functions in the direct sampling method, the deep direct sampling method (DDSM) proposed by Guo and Jiang [42] employs the Cauchy difference functions as the DNN input. Yet another popular class of deep learning-based methods that combines model-based approaches with learned components is based on the idea of unrolling, which replaces components of a classical iterative reconstructive method with a neural network learned from paired training data (see [81] for an overview). Chen et al. [24] proposed a multiple measurement vector (MMV) model-based learning algorithm (called MMV-Net) for recovering the frequency-dependent conductivity in multi-frequency electrical impedance tomography (mfEIT). It unfolds the update steps of the alternating direction method of multipliers for the MMV problem. The authors validated the approach on the Edinburgh mfEIT Dataset and a series of comprehensive experiments. See also [23] for a mask-guided spatial–temporal graph neural network (M-STGNN) to reconstruct mfEIT images in cell culture imaging. Unrolling approaches based on the Gauss-Newton method have also been proposed, where an iterative updating network is learned for the explicitly computed Gauss-Newton updates [49] or a proximal type operator [31]. Likewise, a quasi-Newton method has been proposed by learning an updated singular value decomposition [99]. One should further mention an excellent study on how to apply deep learning concepts for the particular case of EIT-lung data [95], which sets the standards in terms of integrating mathematical as well as clinical expertise into the learned reconstruction process.

Reconstruction methods in these two groups are supervised in nature and rely heavily on high-quality training data. Even though there are a few public EIT datasets, they are insufficient to train DNNs (often with many parameters). In practice, the DNN is learned on synthetic data, simulated with phantoms via, e.g., FEM. The main advantage is that once the neural network is trained, at the inference stage, the process requires only feeding through the trained neural network and thus can be done very efficiently. Generally, these approaches perform well when the test data is close to the distribution of the training data. Still, their performance may degrade significantly when the test data deviates from the setting of the training data [4]. This lack of robustness with respect to the out-of-distribution test data represents one outstanding challenge with all the above approaches.

Third, several unsupervised learning methods have been proposed for EIT reconstruction. Bar et al. [9] employ DNNs to approximate voltage functions {uj}j=1J\{u_{j}\}_{j=1}^{J} and conductivity σ\sigma and then train them together to satisfy the strong PDE conditions and the boundary conditions, following the physics-informed neural networks (PINNs) [89]. Furthermore, data-driven energy-based models are imparted onto the approach to improve the convergence rate and robustness for EIT reconstruction [88]. Bao et al. [8] exploited the weak formulation of the EIT problem, using DNNs to parameterise the solutions and test functions and adopting a minimax formulation to alternatively update the DNN parameters (to find an approximate solution of the EIT problem). Liu et al. [76] applied the deep image prior (DIP) [105], a novel DNN-based approach to regularise inverse problems, to EIT, and optimised the conductivity function by back-propagation and the finite element solver. Generally, the methods in this group are robust with respect to the distributional shift of the test data. However, each new test data requires fresh training, and hence, they tend to be computationally more expensive.

In addition, several neural operators, e.g., [77, 75, 103], have been designed to approximate mostly forward operators. The recent survey [85] discusses various extensions of these neural operators for solving inverse problems by reversed input-output and studies Tikhonov regularisation with a trained forward model.

3.1 Deep D-bar

In practice, reconstructions obtained with the D-bar method suffer from a smoothing effect due to truncation in the scattering transform, which is necessary for finite and noisy data but leaves out all high-frequency information in the data. Thus, we cannot reconstruct sharp edges, and subsequent processing is beneficial. An early approach to overcome the smoothing is to use a nonlinear diffusion process to sharpen edges [46]. In recent years, deep learning has been highly successful for post-processing noisy or artefact-corrupted reconstructions [63].

In the context of the deep D-bar method, we are given an initial analytic reconstruction operator d-bar\mathcal{R}_{\text{d-bar}} that maps the measurements (i.e., the DtN map Λσ,D\Lambda_{\sigma,D} for EIT) to an initial image, which suffers from various artefacts, primarily over-smoothing. Then a U-Net 𝒢θ\mathcal{G}_{\theta} [93] is trained to improve the reconstruction quality of the initial reconstructions, and we refer to the original publication [45] for details on the architecture. Thus, we could write this process as σ𝒢θ(d-bar(Λσ,D))\sigma\approx\mathcal{G}_{\theta}(\mathcal{R}_{\text{d-bar}}(\Lambda_{\sigma,D})), where the network 𝒢θ\mathcal{G}_{\theta} is trained by minimising the 2\ell^{2}-loss of D-bar reconstructions to ground-truth images. Specifically, given a collection of NN paired training data {(σi,Λσi,Dδ)}i=1N\{(\sigma_{i}^{\dagger},\Lambda_{\sigma_{i}^{\dagger},D}^{\delta})\}_{i=1}^{N} (i.e., ground-truth conductivity σi\sigma_{i}^{\dagger} and the corresponding noisy measurement data Λσi,Dδ\Lambda_{\sigma_{i}^{\dagger},D}^{\delta}), we train a DNN 𝒢θ\mathcal{G}_{\theta} by minimising the following empirical loss

(θ)=1Ni=1Nσi𝒢θ(d-bar(Λσi,Dδ))L2(Ω)2,\mathcal{L}(\theta)=\frac{1}{N}\sum_{i=1}^{N}\|\sigma_{i}^{\dagger}-\mathcal{G}_{\theta}(\mathcal{R}_{\text{d-bar}}(\Lambda_{\sigma_{i}^{\dagger},D}^{\delta}))\|_{L^{2}(\Omega)}^{2},

This can be viewed as a specialised denoising scheme to remove the artefacts in the initial reconstruction d-bar(Λσi,Dδ)\mathcal{R}_{\text{d-bar}}(\Lambda_{\sigma_{i}^{\dagger},D}^{\delta}) by the D-bar reconstructor d-bar\mathcal{R}_{\text{d-bar}}. The loss (θ)\mathcal{L}(\theta) is then minimised with respect to the DNN parameters θ\theta, typically by the Adam algorithm [66], a very popular variant of stochastic gradient descent. Once a minimiser θ\theta^{*} of the loss (θ)\mathcal{L}(\theta) is found, given a new test measurement Λσ,Dδ\Lambda_{\sigma,D}^{\delta}, we can obtain the reconstruction 𝒢θ(d-bar(Λσ,Dδ))\mathcal{G}_{\theta^{*}}(\mathcal{R}_{\text{d-bar}}(\Lambda_{\sigma,D}^{\delta})). Thus at the testing stage, the method requires only additional feeding of the initial reconstruction d-bar(Λσ,Dδ)\mathcal{R}_{\text{d-bar}}(\Lambda_{\sigma,D}^{\delta}) through the network 𝒢θ\mathcal{G}_{\theta^{*}}, which is computationally very efficient. This presents one distinct advantage of a supervisedly learned map.
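A minimal PyTorch-style sketch of this supervised training step is shown below: the D-bar reconstructions are precomputed and stored as input images, the ground-truth phantoms as targets, and the empirical loss is minimised with Adam. The small convolutional network is only a stand-in for the U-Net of [45], and the shapes and hyperparameters are illustrative assumptions.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

# Stand-in for the U-Net of [45]; the layer widths are purely illustrative.
post_net = nn.Sequential(
    nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 1, 3, padding=1),
)

def train_deep_dbar(dbar_recos, ground_truths, epochs=50, lr=1e-3, batch_size=16):
    """dbar_recos, ground_truths: float tensors of shape (N, 1, 64, 64) holding the
    precomputed D-bar reconstructions and the corresponding true conductivities."""
    loader = DataLoader(TensorDataset(dbar_recos, ground_truths),
                        batch_size=batch_size, shuffle=True)
    opt = torch.optim.Adam(post_net.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        for x, y in loader:
            opt.zero_grad()
            loss = loss_fn(post_net(x), y)   # l2-loss between G_theta(R_dbar(Lambda)) and sigma
            loss.backward()
            opt.step()
    return post_net
```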

Several extensions have been proposed. Firstly, the need to model boundary shapes in the training data can be eliminated by using the Beltrami approach [7] instead of the classic D-bar method. This allows for domain-independent training [44]. A similar motivation is given by replacing the classic U-net that operates on rectangular pixel domains with a graph convolutional version; this way learned filters are domain and shape-independent [49, 48]. Similarly, the reconstruction from Calderón’s method [11, 96] can be post-processed using U-net, leading to the deep Calderón’s method [21]. Distinctly, the deep Calderón’s method is capable of directly recovering complex valued conductivity distributions. Finally, even the enclosure method can be improved by predicting the convex hull from values of the involved indicator function [97].

3.2 Deep direct sampling method

The deep direct sampling method (DDSM) [42] is based on the direct sampling method (DSM) due to Chow, Ito and Zou [29]. Using only a single Cauchy data pair on the boundary Ω\partial\Omega, the DSM constructs a family of probing functions {ηx,dx}xΩ,dxnH2γ(Ω)\{\eta_{x,d_{x}}\}_{x\in\Omega,d_{x}\in\mathbb{R}^{n}}\subset H^{2\gamma}(\partial\Omega) such that the index function defined by

(x,dx):=ηx,dx,uuσ0γ,Ωuuσ0L2(Ω)|ηx,dx|Y,xΩ,\mathcal{I}(x,d_{x}):=\frac{\langle\eta_{x,d_{x}},u-u_{\sigma_{0}}\rangle_{\gamma,\partial\Omega}}{\|u-u_{\sigma_{0}}\|_{L^{2}(\partial\Omega)}|\eta_{x,d_{x}}|_{Y}},\quad x\in\Omega, (7)

takes large values for points near the inclusions and relatively small values for points far away from the inclusions, where ||Y|\cdot|_{Y} denotes the H2γ(Ω)H^{2\gamma}(\partial\Omega) seminorm and the duality product f,gγ,Ω\langle f,g\rangle_{\gamma,\partial\Omega} is defined by

f,gγ,Ω=Ω(ΔΩ)γfgds=(ΔΩ)γf,gL2(Ω),\langle f,g\rangle_{\gamma,\partial\Omega}=\int_{\partial\Omega}(-\Delta_{\partial\Omega})^{\gamma}fg{\rm d}s=\langle(-\Delta_{\partial\Omega})^{\gamma}f,g\rangle_{L^{2}(\partial\Omega)}, (8)

where ΔΩ-\Delta_{\partial\Omega} denotes the Laplace-Beltrami operator, and (ΔΩ)γ(-\Delta_{\partial\Omega})^{\gamma} its fractional power via spectral calculus. Let the Cauchy difference function φ\varphi be defined by

Δφ=0inΩ,φn=(ΔΩ)γ(uσuσ0)onΩ,Ωφds=0.-\Delta\varphi=0\quad\text{in}\quad\Omega,\quad\frac{\partial\varphi}{\partial n}=(-\Delta_{\partial\Omega})^{\gamma}(u_{\sigma}-u_{\sigma_{0}})\quad\text{on}\quad\partial\Omega,\quad\int_{\partial\Omega}\varphi{\rm d}s=0. (9)

Then the index function (x,dx)\mathcal{I}(x,d_{x}) can be equivalently rewritten as

(x,dx):=dxφ(x)uσuσ0L2(Ω)|ηx,dx|Y,xΩ.\mathcal{I}(x,d_{x}):=\frac{d_{x}\cdot\nabla\varphi(x)}{\|u_{\sigma}-u_{\sigma_{0}}\|_{L^{2}(\partial\Omega)}|\eta_{x,d_{x}}|_{Y}},\quad x\in\Omega. (10)
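On the unit disk, the Cauchy difference function φ defined in (9) can be evaluated from the boundary data by a short Fourier-series computation, since both the fractional Laplace-Beltrami operator and the harmonic extension diagonalise in the trigonometric basis. A minimal sketch, assuming equispaced boundary samples, follows.

```python
import numpy as np

def cauchy_difference_function(u_diff, r, theta, gamma=0.5):
    """Evaluate the Cauchy difference function phi of (9) on the unit disk.

    u_diff   : samples of (u_sigma - u_sigma0) at m equispaced boundary angles
    r, theta : polar coordinates of the interior evaluation points (arrays)
    gamma    : exponent of the fractional Laplace-Beltrami operator

    On the circle, (-Delta_boundary)^gamma multiplies the n-th Fourier mode by |n|^(2*gamma),
    and the harmonic function with that Neumann trace and zero boundary mean is
    phi(r, theta) = sum_{n != 0} |n|^(2*gamma - 1) c_n r^|n| exp(i n theta).
    """
    m = len(u_diff)
    c = np.fft.fft(u_diff) / m                 # Fourier coefficients c_n of u_sigma - u_sigma0
    freqs = np.fft.fftfreq(m, d=1.0 / m)       # integer frequencies n
    phi = np.zeros_like(r, dtype=complex)
    for cn, n in zip(c, freqs):
        if n == 0:
            continue                           # enforces the zero-mean normalisation
        phi += np.abs(n) ** (2 * gamma - 1) * cn * r ** np.abs(n) * np.exp(1j * n * theta)
    return phi.real
```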

Motivated by the relation between the index function (x,dx)\mathcal{I}(x,d_{x}) and the Cauchy difference function φ\varphi, and to fully make use of multiple pairs of measured Cauchy data, Guo and Jiang [42] proposed the DDSM, employing DNNs to learn the relationship between the Cauchy difference functions φ\varphi and the true inclusion distribution. That is, the DDSM constructs and trains a DNN 𝒢θ\mathcal{G}_{\theta} such that

σ𝒢θ(φ1,φ2,,φN),\sigma\approx\mathcal{G}_{\theta}(\varphi_{1},\varphi_{2},...,\varphi_{N}), (11)

where {φi}i=1N\{\varphi_{i}\}_{i=1}^{N} correspond to NN pairs of Cauchy data {g,Λσ,Ng}=1N\{g_{\ell},\Lambda_{\sigma,N}g_{\ell}\}_{\ell=1}^{N}. Guo and Jiang [42] employed a CNN-based U-Net network for DDSM, and later [41] designed a U-integral transformer architecture (including comparison with state-of-the-art DNN architectures, e.g., Fourier neural operator, and U-Net). In our numerical experiments, we choose the U-Net as the network architecture for DDSM as we observe that U-Net can achieve better results than the U-integral transformer for resolution 64×6464\times 64. For higher resolution cases, the U-integral transformer seems to be a better choice due to its more robust ability to capture long-distance information. The following result [42, Theorem 4.1] provides some mathematical foundation of DDSM.

Theorem 3.1.

Let {g}=1\{g_{\ell}\}_{\ell=1}^{\infty} be a fixed orthonormal basis of H1/2(Ω)H^{-1/2}(\partial\Omega). Given an arbitrary σ\sigma such that σ>σ0\sigma>\sigma_{0} or σ<σ0\sigma<\sigma_{0}, let {g,Λσ,Ng}=1\{g_{\ell},\Lambda_{\sigma,N}g_{\ell}\}_{\ell=1}^{\infty} be the Cauchy data pairs and let {φ}=1\{\varphi_{\ell}\}_{\ell=1}^{\infty} be the corresponding Cauchy difference functions with γ=σ0\gamma=\sigma_{0}. Then the inclusion distribution σ\sigma can be uniquely determined from {φ}=1.\{\varphi_{\ell}\}_{\ell=1}^{\infty}.

The idea of DDSM was extended to diffusive optical tomography in [43]. Ning et al. [86] employ the index functions obtained from the DSM as the input of the DNN for solving inverse obstacle scattering problems.

3.3 CNN based on LeNet

Tan et al. [102] proposed using a CNN to directly learn the map from the measured data to the conductivity distribution. The employed network architecture is based on LeNet and refined by applying a dropout layer and moving average. The CNN architecture used in the numerical experiments below is shown in Fig. 1. Since the number of injected currents and the discretisation size differ from those in [102], we modify the input size, network depth, kernel size, etc. The input size is 32 × 64. The kernel size is 5 × 5 with zero-padding, and max pooling rather than average pooling is adopted to gain better performance. The sigmoid activation function used in LeNet causes a serious saturation phenomenon, which can lead to vanishing gradients, so ReLU is chosen as the activation function below. A dropout layer is added to improve the generalisation ability of the model: one-half of the neurons before the first fully connected layer are randomly discarded from the network during the training process. This reduces the complex co-adaptation among neurons so that the network can learn more robust features. In addition, dropout layers have proven to be very effective when training on large datasets.

Figure 1: The architecture of the CNN based on LeNet.
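A PyTorch sketch of such a LeNet-style network is given below; only the ingredients stated above (32 × 64 input, 5 × 5 kernels with zero padding, max pooling, ReLU, and a dropout layer with rate 0.5 before the first fully connected layer) are kept, while the channel and layer widths are illustrative assumptions rather than the exact architecture of Fig. 1.

```python
import torch.nn as nn

class LeNetEIT(nn.Module):
    """LeNet-style CNN mapping measured voltages (1 x 32 x 64) to a 64 x 64 image."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool2d(2),                                 # -> 16 x 16 x 32
            nn.Conv2d(16, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool2d(2),                                 # -> 32 x 8 x 16
        )
        self.regressor = nn.Sequential(
            nn.Flatten(),
            nn.Dropout(p=0.5),                               # dropout before the first FC layer
            nn.Linear(32 * 8 * 16, 1024), nn.ReLU(),
            nn.Linear(1024, 64 * 64),
        )

    def forward(self, v):
        out = self.regressor(self.features(v))
        return out.view(-1, 1, 64, 64)
```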

3.4 FC-UNet

Chen et al. [26] proposed a novel deep learning architecture by adding a fully connected layer before the U-Net structure. The input of the network is given by the difference voltage uσδuσ0u_{\sigma}^{\delta}-u_{\sigma_{0}}. Inspired by a linearized approximation of the EIT problem for a small perturbation of conductivity distribution σσ0\sigma-\sigma_{0}:

uσδuσ0𝐉(σσ0),u_{\sigma}^{\delta}-u_{\sigma_{0}}\approx\mathbf{J}(\sigma-\sigma_{0}), (12)

where 𝐉\mathbf{J} denotes the sensitivity matrix, the method first generates an initial guess of the conductivity distribution σ\sigma from the linear fully connected (FC) layer followed by a ReLU layer and then feeds it to a denoising U-Net model to learn the nonlinear relationship further. Thus we could write this process as σ𝒢θ(uσδuσ0)=𝒢θ2(𝒢θ1(uσδuσ0))\sigma\approx\mathcal{G}_{\theta}(u_{\sigma}^{\delta}-u_{\sigma_{0}})=\mathcal{G}_{\theta_{2}}(\mathcal{G}_{\theta_{1}}(u_{\sigma}^{\delta}-u_{\sigma_{0}})) with 𝒢θ1=FC+ReLU\mathcal{G}_{\theta_{1}}=\text{FC+ReLU} and 𝒢θ2=U-Net\mathcal{G}_{\theta_{2}}=\text{U-Net}. The authors also proposed an initialisation strategy to further help obtain the initial guess, i.e., the weights θ1\theta_{1} of the fully connected layer are initialised with the least-squares solution using training data. The weights θ2\theta_{2} for the U-Net are initialised randomly as usual. Then, all weights θ=θ1θ2\theta=\theta_{1}\cup\theta_{2} are updated during the training process. According to the numerical results shown in [26], this special weight initialisation strategy can reduce the training time and improve the reconstruction quality. Once the network is trained, and in contrast to the deep D-bar and DDSM methods, FC-UNet and the CNN based on LeNet involve only a forward pass of the trained network for each test example.

Based on our numerical experience, dropping the ReLU layer following the fully connected layer can provide better reconstruction results, at least for the examples in Section 4. Thus, for the numerical experiments, we employ the FC-UNet network as shown in Fig. 2, in which only a linear fully connected layer is employed before the U-Net.
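A sketch of this construction, including a possible least-squares initialisation of the fully connected layer (here with an added ridge term for numerical stability, which is an assumption on our part rather than the exact procedure of [26]), reads as follows; the U-Net is passed in as an arbitrary image-to-image module.

```python
import torch
import torch.nn as nn

class FCUNet(nn.Module):
    def __init__(self, n_meas, unet):
        super().__init__()
        self.fc = nn.Linear(n_meas, 64 * 64)      # G_theta1: voltage differences -> coarse image
        self.unet = unet                          # G_theta2: any image-to-image network

    def forward(self, dv):                        # dv: (B, n_meas) difference voltages
        img = self.fc(dv).view(-1, 1, 64, 64)
        return self.unet(img)

def init_fc_least_squares(model, dV_train, sigma_train, reg=1e-3):
    """Initialise the FC weights with a (ridge-regularised) least-squares fit of
    dV_train @ W ~= sigma_train, with dV_train: (N, n_meas) and sigma_train: (N, 64*64)."""
    A = dV_train.T @ dV_train + reg * torch.eye(dV_train.shape[1])
    W = torch.linalg.solve(A, dV_train.T @ sigma_train)      # shape (n_meas, 64*64)
    with torch.no_grad():
        model.fc.weight.copy_(W.T)
        model.fc.bias.zero_()
```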

In addition, by employing the FC-UNet to extract structure distribution and a standard CNN to extract conductivity values, a structure-aware dual-branch network was designed in [25] to solve EIT problems.

Figure 2: The architecture of FC-UNet.

4 Numerical experiments and results

The core of this work is the extensive numerical experiments. Now, we describe how to generate the dataset used in the experiments, highlighting its peculiarity and relevance in real-world scenarios, and also the performance metrics used for comparing different methods. Last, we present and discuss the experimental results.

4.1 Dataset generation and characteristics

Generating simulated data consists of three main parts, which we describe below. The codes for data generation are available at https://github.com/dericknganyu/EIT_dataset_generation.

In the 2D setting, we generate NN circular phantoms {Pi}i=1N\{P_{i}\}_{i=1}^{N}, all restricted to the unit circle centred at the origin, i.e., Ω={(x,y):x2+y21}\Omega=\{(x,y):x^{2}+y^{2}\leq 1\} in the Cartesian coordinates or {(r,θ):r1,θ[0,2π]}\{(r,\theta):r\leq 1,\theta\in[0,2\pi]\} in polar coordinates. The phantoms are generated randomly. Firstly, we decide on the maximum number MM\in\mathbb{N} of inclusions. Each phantom then contains nn inclusions, where n𝒰{1,,M}n\in\mathcal{U}\{1,\ldots,M\}, the uniform distribution over the set {1,,M}\{1,\ldots,M\}. To mimic realistic scenarios in medical imaging, the inclusions are elliptical and are sampled such that when n>1n>1, the inclusions do not overlap. Since the inclusions are elliptical, each inclusion EjE_{j}, j=1,,nj=1,\ldots,n, is characterised by a centre Cj=(hj,kj)C_{j}=(h_{j},k_{j}), an angle of rotation αj\alpha_{j}, and major and minor axes aja_{j} and bjb_{j}, respectively. The parametric equation of an ellipse EjE_{j} is thus given by

Ej={(x,y):(xy)=(hj+ajcosθcosαjbjsinθsinαjkj+ajcosθsinαj+bjsinθcosαj),θ[0,2π]}.E_{j}=\left\{(x,y):\begin{pmatrix}x\\ y\end{pmatrix}=\begin{pmatrix}h_{j}+a_{j}\cos{\theta}\cos{\alpha_{j}}-b_{j}\sin{\theta}\sin{\alpha_{j}}\\ k_{j}+a_{j}\cos{\theta}\sin{\alpha_{j}}+b_{j}\sin{\theta}\cos{\alpha_{j}}\end{pmatrix},\theta\in[0,2\pi]\right\}. (13)

To mimic realistic scenarios in medical imaging, the inclusions are sampled to avoid contact with the boundary Ω\partial\Omega of the domain Ω\Omega. For an inclusion EjE_{j}, we have x2+y2<0.9x^{2}+y^{2}<0.9 for any (x,y)Ej(x,y)\in E_{j}. In this way, all phantoms have inclusions contained within Ω\Omega. We illustrate this in Algorithm 3.

Each phantom Pi,i{1,2,,N},P_{i},i\in\{1,2,\ldots,N\}, has (Ej)j=1n(E_{j})_{j=1}^{n} inclusions, with n𝒰{1,,M}n\in\mathcal{U}\{1,\ldots,M\}. For each EjE_{j}, we assign a conductivity σjiΣj:=𝒰(0.2,0.8)𝒰(1.2,2.0)\sigma^{i}_{j}\in\Sigma_{j}:=\mathcal{U}(0.2,0.8)\cup\mathcal{U}(1.2,2.0). The background conductivity is set to 11. In this way, given a point (x,y)Pi(x,y)\in P_{i} in the domain/phantom, the conductivity σi(x,y)\sigma_{i}(x,y) at that point is therefore given by

σi(x,y)={σjiΣj,if(x,y)Ej,j=1,,n1,otherwise.\sigma_{i}(x,y)=\begin{cases}\sigma^{i}_{j}\in\Sigma_{j},&\text{if}\ (x,y)\in E_{j},~{}j=1,\ldots,n\\ 1,&\text{otherwise.}\end{cases} (14)

Fig. 3(b) shows an example of a phantom generated in this way.

Next, for any simulated σ\sigma, we solve the forward problem (1) using the Galerkin finite element method (FEM) [71, 39], for the injected currents g1g_{1} and g2g_{2} in (15) around the boundary Ω\partial\Omega. The points (x,y)Pi(x,y)\in P_{i} are thus nodes in the finite element mesh shown in Fig. 3(a)

g1\displaystyle g_{1} =π1/2sin(nθ)andg2\displaystyle=\pi^{-1/2}\sin(n\theta)\quad\mbox{and}\quad g_{2} =π1/2cos(nθ),n=1,2,,16\displaystyle=\pi^{-1/2}\cos(n\theta),\quad n=1,2,\ldots,16 (15)

We use the MATLAB PDE toolbox in the numerical experiment to solve the forward problem.

(a) Forward solver FEM mesh.
(b) Constant σji|j=13\sigma^{i}_{j}\big{|}^{3}_{j=1} inclusions.
(c) Textured σji|j=13\sigma^{i}_{j}\big{|}^{3}_{j=1} inclusions.
Figure 3: Illustration of phantom characteristics used in the simulated data.
Input:
  \bullet nodes (x,y),{1,2,,L}(x_{\ell},y_{\ell}),\ell\in\{1,2,\ldots,L\} from FEM mesh
  \bullet NN\in\mathbb{N}, number of phantoms
  \bullet MM\in\mathbb{N}, maximum number of inclusions
Result:
  \bullet Phantoms PiP_{i}, with conductivity σi,i{1,2,,N}\sigma_{i},i\in\{1,2,\ldots,N\}
1 for i \leftarrow 1, …, N do
2        select n𝒰{1,,M}n\in\mathcal{U}\{1,\ldots,M\};
3        for j \leftarrow 1, …, n do
               /* Sample inclusions and conductivity */
4               Sample EjE_{j}, non-overlapping ellipses based on (13), within the circle of radius 0.90.9;
5               Sample σji𝒰(0.2,0.8)𝒰(1.2,2.0)\sigma^{i}_{j}\in\mathcal{U}(0.2,0.8)\cup\mathcal{U}(1.2,2.0);
6              
7        end for
8       for \ell\leftarrow 1, …, L do
               /* Evaluate conductivity on mesh nodes */
9               Evaluate σi(x,y)\sigma_{i}(x_{\ell},y_{\ell}) based on (14) ;
10              
11        end for
12       
13 end for
Algorithm 3 Procedure for generating phantoms
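A condensed Python sketch of Algorithm 3 is shown below. The point-in-ellipse test follows (13)-(14); the ranges used for sampling the ellipse parameters and the crude bounding-circle overlap test are illustrative choices, not the exact ones used to generate the dataset.

```python
import numpy as np

rng = np.random.default_rng()

def sample_value():
    # conductivity value from U(0.2, 0.8) or U(1.2, 2.0), each with probability 1/2
    return rng.uniform(0.2, 0.8) if rng.random() < 0.5 else rng.uniform(1.2, 2.0)

def in_ellipse(x, y, e):
    # point-in-ellipse test for e = (h, k, a, b, alpha), cf. (13)
    h, k, a, b, alpha = e
    xr = (x - h) * np.cos(alpha) + (y - k) * np.sin(alpha)
    yr = -(x - h) * np.sin(alpha) + (y - k) * np.cos(alpha)
    return (xr / a) ** 2 + (yr / b) ** 2 <= 1.0

def random_phantom(nodes_x, nodes_y, max_inclusions=3):
    n = rng.integers(1, max_inclusions + 1)                   # n ~ U{1, ..., M}
    ellipses = []
    while len(ellipses) < n:
        a, b = rng.uniform(0.1, 0.4, size=2)                  # illustrative size range
        h, k = rng.uniform(-0.5, 0.5, size=2)
        alpha = rng.uniform(0.0, np.pi)
        if np.hypot(h, k) + max(a, b) > 0.9:                  # keep inside radius 0.9
            continue
        if any(np.hypot(h - e[0], k - e[1]) <= max(a, b) + max(e[2], e[3])
               for e in ellipses):                            # crude non-overlap test
            continue
        ellipses.append((h, k, a, b, alpha))
    sigma = np.ones_like(nodes_x, dtype=float)                # background conductivity 1
    for e in ellipses:
        sigma[in_ellipse(nodes_x, nodes_y, e)] = sample_value()   # cf. (14)
    return sigma
```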

In real-life situations, the conductivities of the inclusions are rarely constant. Indeed, internal organs in medical applications usually exhibit textures. Motivated by this, we take a step further in generating phantoms, with inclusions having variable conductivities. This introduces a novel challenge to the EIT problem, and we seek to study its impact on different reconstruction algorithms. The procedure to generate simulated data remains unchanged. However, σji\sigma^{i}_{j} in equation (14) becomes

σji=sfRαj,Cj,\sigma^{i}_{j}=s\circ f\circ R_{\alpha_{j},C_{j}},

where f:2(x,y)12(sinkxx+sinkyy)[1,1]f:\mathbb{R}^{2}\ni(x,y)\mapsto\tfrac{1}{2}\left(\sin{k_{x}x}+\sin{k_{y}y}\right)\in[-1,1], Rαj,CjR_{\alpha_{j},C_{j}} is the rotation about the centre Cj=(hj,kj)C_{j}=(h_{j},k_{j}) by the angle αj\alpha_{j}, i.e., about the centre and by the rotation angle of the ellipse EjE_{j}; and ss applies a scaling so that the resulting σji\sigma^{i}_{j} is either within the range [0.2,0.8][0.2,0.8] or [1.2,2.0][1.2,2.0]. Fig. 3(c) shows an example phantom.
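A small sketch of this textured conductivity, with the wavenumbers kx, ky and the target conductivity band passed in as illustrative parameters, reads:

```python
import numpy as np

def textured_value(x, y, centre, alpha, kx, ky, band=(1.2, 2.0)):
    """Textured inclusion conductivity sigma_j^i = s o f o R_{alpha_j, C_j} at points (x, y)."""
    h, k = centre
    # R_{alpha, C}: rotate the coordinates about the inclusion centre by the angle alpha
    xr = (x - h) * np.cos(alpha) + (y - k) * np.sin(alpha) + h
    yr = -(x - h) * np.sin(alpha) + (y - k) * np.cos(alpha) + k
    f = 0.5 * (np.sin(kx * xr) + np.sin(ky * yr))        # f maps into [-1, 1]
    lo, hi = band
    return lo + 0.5 * (f + 1.0) * (hi - lo)              # s: affine scaling into the band
```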

We also study the performance of the methods in noisy scenarios, i.e. reconstructing the conductivity from noisy measurements. The solution $u$ of the forward problem, restricted to the boundary $\partial\Omega$, is perturbed with normally distributed random noise at different levels $\delta$:

u^{\delta}(x)=u(x)+\delta\,|u(x)|\,\xi(x),\qquad x\in\partial\Omega,

where $\xi(x)$ follows the standard normal distribution $\mathcal{N}(0,1)$.
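A minimal sketch of this noise model, assuming the boundary voltages are stored as a NumPy array of nodal values:

```python
import numpy as np

def add_noise(u_boundary, delta, rng=None):
    """Perturb boundary voltages: u_delta = u + delta * |u| * xi, with xi ~ N(0, 1)."""
    rng = np.random.default_rng() if rng is None else rng
    xi = rng.standard_normal(np.shape(u_boundary))
    return u_boundary + delta * np.abs(u_boundary) * xi

# Example: 1% and 5% noise levels on a vector of boundary values.
u = np.linspace(-1.0, 1.0, 64)
u_1pct = add_noise(u, 0.01)
u_5pct = add_noise(u, 0.05)
```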

For the deep learning methods, we employ 20,000 training samples and 100 validation samples, both without added noise. We then compare the results on 100 test samples at different noise levels.

We employ several performance metrics commonly used in the literature to compare the different reconstruction methods comprehensively. Table 1 outlines these metrics with their mathematical expressions and specifications. In Table 1, $\boldsymbol{\sigma}$ denotes the ground truth with mean $\mu_{\boldsymbol{\sigma}}$ and variance $s^{2}_{\boldsymbol{\sigma}}$, while $\boldsymbol{\hat{\sigma}}$ denotes the predicted conductivity with mean $\mu_{\boldsymbol{\hat{\sigma}}}$ and variance $s^{2}_{\boldsymbol{\hat{\sigma}}}$. $\hat{\sigma}_{i}$ and $\sigma_{i}$ are the $i$-th elements of $\boldsymbol{\hat{\sigma}}$ and $\boldsymbol{\sigma}$, respectively, and $N$ is the total number of pixels, so that $\boldsymbol{\sigma}=(\sigma_{i})^{N}_{i=1}$ and $\boldsymbol{\hat{\sigma}}=(\hat{\sigma}_{i})^{N}_{i=1}$.

Error Metric  Mathematical Expression  Highlights
Relative Image Error (RIE)  $\dfrac{\|\hat{\boldsymbol{\sigma}}-\boldsymbol{\sigma}\|_{1}}{\|\boldsymbol{\sigma}\|_{1}}=\dfrac{\sum_{i=1}^{N}|\hat{\sigma}_{i}-\sigma_{i}|}{\sum_{i=1}^{N}|\sigma_{i}|}$  Evaluates the relative error between the true value and the prediction [26].
Image Correlation Coefficient (ICC)  $\dfrac{\sum_{i=1}^{N}(\hat{\sigma}_{i}-\mu_{\hat{\boldsymbol{\sigma}}})(\sigma_{i}-\mu_{\boldsymbol{\sigma}})}{\sqrt{\sum_{i=1}^{N}(\hat{\sigma}_{i}-\mu_{\hat{\boldsymbol{\sigma}}})^{2}}\sqrt{\sum_{i=1}^{N}(\sigma_{i}-\mu_{\boldsymbol{\sigma}})^{2}}}$  Measures the similarity between the true value and the prediction [26, 110].
Dice Coefficient (DC)  $\dfrac{2|X\cap Y|}{|X|+|Y|}$  Tests the accuracy of the results: the ratio of correctly predicted pixels to the total number of pixels; the closer to 1, the better [41]. For our experiments, we round the pixel values to 2 decimal places before evaluation.
Relative $L^{2}$ Error (RLE)  $\dfrac{\|\hat{\boldsymbol{\sigma}}-\boldsymbol{\sigma}\|_{2}}{\|\boldsymbol{\sigma}\|_{2}}=\dfrac{\left(\sum_{i=1}^{N}|\hat{\sigma}_{i}-\sigma_{i}|^{2}\right)^{1/2}}{\left(\sum_{i=1}^{N}|\sigma_{i}|^{2}\right)^{1/2}}$  Measures the relative difference between the truth and the prediction; the closer to 0, the better [41, 110].
Root Mean Squared Error (RMSE)  $\sqrt{\dfrac{1}{N}\sum_{i=1}^{N}(\sigma_{i}-\hat{\sigma}_{i})^{2}}$  Evaluates the average magnitude of the differences between the truth and the prediction [110].
Mean Absolute Error (MAE)  $\dfrac{1}{N}\sum_{i=1}^{N}|\sigma_{i}-\hat{\sigma}_{i}|$  Evaluates the average magnitude of the absolute differences between the truth and the prediction [110].
Table 1: Description of the various performance metrics.
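For reference, the following sketch computes the metrics of Table 1 for a pair of conductivity images. The Dice coefficient is implemented under the reading given in the table (the fraction of pixels whose values, rounded to 2 decimal places, agree), which is an assumption about the exact evaluation protocol.

```python
import numpy as np

def metrics(sigma_true, sigma_pred):
    """Compute the six performance metrics of Table 1 for a pair of conductivity images."""
    s, p = np.ravel(sigma_true), np.ravel(sigma_pred)
    out = {}
    out["RIE"] = np.sum(np.abs(p - s)) / np.sum(np.abs(s))
    out["ICC"] = (np.sum((p - p.mean()) * (s - s.mean()))
                  / np.sqrt(np.sum((p - p.mean()) ** 2) * np.sum((s - s.mean()) ** 2)))
    # DC as described in Table 1: fraction of pixels whose values, rounded to
    # 2 decimal places, agree (an assumed reading of 2|X ∩ Y| / (|X| + |Y|)).
    out["DC"] = np.mean(np.round(p, 2) == np.round(s, 2))
    out["RLE"] = np.linalg.norm(p - s) / np.linalg.norm(s)
    out["RMSE"] = np.sqrt(np.mean((s - p) ** 2))
    out["MAE"] = np.mean(np.abs(s - p))
    return out

# Example on two small images: background 1 with one inclusion of value 1.5 / 1.4.
truth = np.ones((8, 8)); truth[2:4, 2:4] = 1.5
pred = np.ones((8, 8));  pred[2:4, 2:5] = 1.4
print(metrics(truth, pred))
```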

4.2 Results and discussions

Tables 2 and 3 present quantitative values of the performance metrics for the various EIT reconstruction methods at different noise levels, $\delta=0\%$, $\delta=1\%$ and $\delta=5\%$. The performance metrics are described in Table 1. Interpreting the results requires keeping the behaviour of these metrics in mind: for RIE, RMSE, MAE and RLE, lower values indicate better performance, so the objective is to minimise them; for DC and ICC, values closer to 1 indicate better performance, so the goal is to maximise them. Below, we examine the results in each table more closely.

Method  RIE  ICC  DC  RMSE  MAE  RLE
Sparsity  0.03844  0.02159  0.79134  0.09989  0.10360  0.03904
D-bar  0.09784  0.01486  0.08581  0.15515  0.16123  0.09928
Deep D-bar  0.03677  0.02627  0.45121  0.09957  0.10269  0.03721
DDSM  0.03450  0.02690  0.48590  0.08793  0.09075  0.03494
FC-UNet  0.01863  0.02954  0.76004  0.06405  0.06615  0.01890
CNN LeNet  0.04951  0.02509  0.18129  0.08579  0.08856  0.05011
(a) $\delta=0\%$

Method  RIE  ICC  DC  RMSE  MAE  RLE
Sparsity  0.03835  0.02162  0.79263  0.09982  0.10353  0.03894
D-bar  0.08756  0.01429  0.08125  0.14254  0.14798  0.08889
Deep D-bar  0.02738  0.02762  0.75264  0.08477  0.08751  0.02774
DDSM  0.03581  0.02705  0.46511  0.09047  0.09342  0.03630
FC-UNet  0.02159  0.02929  0.72974  0.07170  0.07409  0.02194
CNN LeNet  0.05905  0.02499  0.15198  0.09884  0.10215  0.05988
(b) $\delta=1\%$

Method  RIE  ICC  DC  RMSE  MAE  RLE
Sparsity  0.03952  0.02159  0.78729  0.10272  0.10658  0.04015
D-bar  0.08585  0.01349  0.06841  0.13870  0.14377  0.08713
Deep D-bar  0.05563  0.02272  0.51711  0.13523  0.13954  0.05648
DDSM  0.04833  0.02412  0.38292  0.11310  0.11704  0.04917
FC-UNet  0.04332  0.02672  0.28293  0.11519  0.11923  0.04415
CNN LeNet  0.13901  0.02312  0.04568  0.21138  0.21876  0.14155
(c) $\delta=5\%$
Table 2: Performance of the various methods trained and tested with piecewise constant data at different noise levels. For the deep learning-based methods, the neural networks were trained on noiseless measurements.
Figure 4: Effects of noise on two piecewise constant samples ((a) Sample 1, (b) Sample 2) by the various reconstruction methods.

4.2.1 Piece-wise constant conductivities

In the noiseless scenario depicted in Table 2(a), FC-UNet shows the best performance across most metrics, with notably low RIE, RMSE, MAE, and RLE, and it also achieves a high DC and ICC, indicating robustness and accuracy in image reconstruction. The DDSM also performs well, particularly regarding RIE, RMSE, MAE, and RLE. The Deep D-bar method exhibits competitive results, although slightly inferior to FC-UNet. Both the sparsity and D-bar methods show weaker performance than the deep learning-based methods, although the sparsity method attains the highest DC. The CNN LeNet method generally has the worst performance metrics, indicating less accurate image reconstruction.

Under increased noise of $\delta=1\%$, the relative performance of the methods remains consistent, with FC-UNet still demonstrating strong performance. The Deep D-bar also performs exceptionally well in this case, particularly in terms of RIE, RMSE, MAE, and RLE. The DDSM likewise exhibits robust performance at this noise level, while the CNN LeNet method continues to have the highest values for most metrics, indicating challenges in handling noise. In contrast, the analytic-based sparsity and D-bar methods show particular robustness to the added noise, evidenced by the negligible change in their performance metrics.

At the higher noise level $\delta=5\%$, the inverse problem becomes more challenging due to its severe ill-posedness; in the learned context, since the neural networks are trained on noiseless data, which differ markedly from the noisy test data, the setting may be viewed as an out-of-distribution robustness test. Here, the sparsity method comes out on top across most metrics, having maintained almost constant performance. However, FC-UNet retains the best performance in terms of ICC, emphasising its robustness in noisy conditions. Deep D-bar and DDSM display competitive results, indicating resilience to increased noise. The D-bar method exhibits slightly weaker performance, especially in terms of RIE, RMSE, and MAE, while the CNN LeNet method continues to have the highest values for most metrics, suggesting difficulty in coping with substantial noise.

Overall, these results illustrate the varying performance of the different EIT methods under different noise levels. The deep learning-based methods, particularly FC-UNet, perform well at low noise levels, whereas the sparsity method shows consistent robustness at higher noise levels, indicating its effectiveness in reconstructing EIT images even in the presence of noise. Visual results across all noise levels are shown for two test samples in Figure 4.

4.2.2 Textured inclusions scenario

In the noiseless scenario depicted in Table 3(a), the best-performing method in terms of RIE, ICC, RMSE, MAE, and RLE is FC-UNet. The DDSM and the sparsity method also perform well in these metrics. For DC, the sparsity method achieves the highest value by a clear margin. The worst-performing method across all metrics in this scenario is the D-bar method.

Method  RIE  ICC  DC  RMSE  MAE  RLE
Sparsity  0.03869  0.01968  0.79695  0.10473  0.10864  0.03949
D-bar  0.08856  0.01473  0.09956  0.14597  0.15257  0.09063
Deep D-bar  0.03677  0.02627  0.45121  0.09957  0.10269  0.03721
DDSM  0.03559  0.02360  0.42812  0.08930  0.09282  0.03641
FC-UNet  0.02781  0.02639  0.45464  0.07379  0.07679  0.02850
CNN LeNet  0.04930  0.02308  0.18749  0.09010  0.09357  0.05034
(a) $\delta=0\%$

Method  RIE  ICC  DC  RMSE  MAE  RLE
Sparsity  0.03871  0.01961  0.79497  0.10455  0.10846  0.03951
D-bar  0.07982  0.01381  0.09294  0.13634  0.14231  0.08168
Deep D-bar  0.02738  0.02762  0.75264  0.08477  0.08751  0.02774
DDSM  0.03663  0.02325  0.43803  0.09261  0.09626  0.03750
FC-UNet  0.02980  0.02592  0.42355  0.07957  0.08280  0.03055
CNN LeNet  0.06301  0.02253  0.12531  0.10709  0.11112  0.06426
(b) $\delta=1\%$

Method  RIE  ICC  DC  RMSE  MAE  RLE
Sparsity  0.03975  0.01961  0.79157  0.10766  0.11177  0.04061
D-bar  0.08015  0.01265  0.07503  0.13590  0.14161  0.08193
Deep D-bar  0.05563  0.02272  0.51711  0.13523  0.13954  0.05648
DDSM  0.04775  0.02091  0.38445  0.11451  0.11883  0.04882
FC-UNet  0.05195  0.02269  0.28514  0.12528  0.12999  0.05312
CNN LeNet  0.17301  0.01907  0.03654  0.25557  0.26481  0.17637
(c) $\delta=5\%$
Table 3: Performance of the various methods trained and tested with textured data at different noise levels. For the deep learning-based methods, the neural networks were trained on noiseless measurements.

With $1\%$ noise added, the Deep D-bar surprisingly stands out as the best-performing method for most of the considered metrics, closely followed by FC-UNet. The sparsity-based method continues to lead in DC. As in the noiseless scenario, D-bar remains one of the least effective methods across all metrics. This is depicted in Table 3(b).

At the higher noise level in Table 3(c), the sparsity-based method once again excels in all metrics except ICC, making it the best-performing method overall. DDSM and FC-UNet follow closely in most of these metrics, while the Deep D-bar performs best in ICC. CNN LeNet consistently performs the poorest across all metrics and noise levels, especially in this high-noise scenario.

In summary, the best-performing method varies depending on the specific performance metric and noise level. Sparsity consistently demonstrates robust performance in both noiseless and noisy scenarios, whereas the D-bar method is generally less effective; the sparsity method is, however, computationally more expensive. The Deep D-bar, FC-UNet, and DDSM are often strong contenders, with their rankings shifting across noise scenarios and metrics. Meanwhile, CNN LeNet consistently performs the poorest, particularly in the high-noise scenario ($\delta=5\%$). Figure 5 depicts this for two test examples.

Figure 5: Effects of noise on two textured samples ((a) Sample 1, (b) Sample 2) by the various reconstruction methods.

Furthermore, for both piecewise constant and textured phantoms, the sparsity-based method consistently performed well in noisy scenarios. This consistently good performance of the sparsity concept in detecting and locating inclusions, even at higher noise levels, is most remarkable: the error metrics are almost constant over noise levels up to $5\%$. Hence, as a side result, we checked the limits of the sparsity concept for very high noise levels, which, not surprisingly, showed a sharp decrease in reconstruction accuracy. We show this in Figure 6, once again for the two piecewise constant samples initially displayed in Figure 4. The respective performances of these two samples, across all metrics, are shown in Figure 7 (ICC is omitted for the sake of visibility, since its values are the smallest). Figures 8 and 9 show the corresponding plots for the textured samples initially displayed in Figure 5.

Figure 6: Effects of additional noise on two piecewise constant samples ((a) Sample 1, (b) Sample 2) by the sparsity method.
Figure 7: Performance variation with noise for two piecewise constant samples ((a) Sample 1, (b) Sample 2) by the sparsity method.
Figure 8: Effects of additional noise on two textured samples ((a) Sample 1, (b) Sample 2) by the sparsity method.
Figure 9: Performance variation with noise for two textured samples ((a) Sample 1, (b) Sample 2) by the sparsity method.

5 Conclusion and future directions

In summary, this review has comprehensively examined numerical methods for addressing the EIT inverse problem. EIT, a versatile imaging technique with applications in various fields, presents a highly challenging task of reconstructing internal conductivity distributions from boundary measurements. We explored the interplay between modern deep learning-based approaches and traditional analytic methods for solving the EIT inverse problem. Four advanced deep learning algorithms were rigorously assessed, including the deep D-bar method, deep direct sampling method, fully connected U-net, and convolutional neural networks. Additionally, two analytic-based methods, incorporating mathematical formulations and regularisation techniques, were examined regarding their efficacy and limitations. Our evaluation involved a comprehensive array of numerical experiments encompassing diverse scenarios that mimic real-world complexities. Multiple performance metrics were employed to provide insight into the methods’ ability to capture essential features and delineate complex conductivity patterns.

The first evaluation was based on piecewise constant conductivities. The clear winners of this series of tests are the analytic sparsity-based reconstruction and the learned FC-UNet. Both perform best, with slight variations depending on the noise level. This is not surprising for learned methods, which adapt well to this particular set of test data. However, the excellent performance of sparsity methods, which can identify and locate piecewise constant inclusions correctly, is most remarkable.

A noteworthy aspect of this study was the introduction of variable conductivity scenarios, mimicking textured inclusions and departing from uniform conductivity assumptions. This enabled us to assess how each method responds to varying conductivity, shedding light on their robustness and adaptability. Here, the D-bar method with learned post-processing achieves competitive results. The winning algorithm alternates between sparsity, Deep D-bar and FC-UNet. The good performance of the sparsity concept is somewhat surprising for these textured test samples. However, none of the proposed methods was able to reconstruct the textures reliably at higher noise levels. That is, the quality of the reconstruction was mainly measured in terms of how well the inclusions were located, which gives a particular advantage to sparsity concepts.

These results naturally raise questions about the numerical results presented in several existing EIT studies, where learned methods were only compared with sub-optimal analytic methods. Our findings clearly indicate that, at least within the restricted scope of the present study, optimised analytic methods can reach comparable or even superior accuracy. Of course, one should note that, after training, learned methods are much more efficient and thus provide a preferred option for real-time imaging.

In conclusion, this review contributes to a deeper understanding of the available solutions for the EIT inverse problem, highlighting the role of deep learning and analytic-based methods in advancing the field.

Acknowledgements

D.N.T. acknowledges the financial support of this research work within the Research Unit 3022 "Ultrasonic Monitoring of Fiber Metal Laminates Using Integrated Sensors" by the German Research Foundation (Deutsche Forschungsgemeinschaft (DFG)) under grant number LO1436/12-1 and project number 418311604.

J.N. acknowledges the financial support from the program of China Scholarships Council (No. 202006270155).

A.H. acknowledges support by the Research Council of Finland: Academy Research Fellow (Project No. 338408) and the Centre of Excellence of Inverse Modelling and Imaging project (Project No. 353093).

B.J. acknowledges the support by a start-up fund and Direct Grant of Research, both from The Chinese University of Hong Kong, Hong Kong General Research Fund (Project No. 14306423) and UK Engineering and Physical Research Council (EP/V026259/1).

P.M. acknowledges the financial support from the DFG project number 281474342: Graduiertenkolleg RTG 2224 Parameter Identification - Analysis, Algorithms, Applications.

References

  • [1] G. Alessandrini. Stable determination of conductivity by boundary measurements. Applicable Analysis, 27(1-3):153–172, 1988.
  • [2] H. Ammari, R. Griesmaier, and M. Hanke. Identification of small inhomogeneities: asymptotic factorization. Math. Comp., 76(259):1425–1448, 2007.
  • [3] H. Ammari, E. Iakovleva, and D. Lesselier. A MUSIC algorithm for locating small inclusions buried in a half-space from the scattering amplitude at a fixed frequency. Multiscale Model. Simul., 3(3):597–628, 2005.
  • [4] V. Antun, F. Renna, C. Poon, and A. C. Hansen. On instabilities of deep learning in image reconstruction and the potential costs of AI. Proc. Nat. Acad. Sci., 117(48):30088–30095, 2020.
  • [5] S. Arridge, P. Maass, O. Öktem, and C.-B. Schönlieb. Solving inverse problems using data-driven models. Acta Numer., 28:1–174, 2019.
  • [6] K. Astala, D. Faraco, and L. Székelyhidi, Jr. Convex integration and the $L^{p}$ theory of elliptic equations. Ann. Sc. Norm. Super. Pisa Cl. Sci. (5), 7(1):1–50, 2008.
  • [7] K. Astala and L. Päivärinta. Calderón’s inverse conductivity problem in the plane. Ann. of Math. (2), 163(1):265–299, 2006.
  • [8] G. Bao, X. Ye, Y. Zang, and H. Zhou. Numerical solution of inverse problems by weak adversarial networks. Inverse Problems, 36(11):115003, 2020.
  • [9] L. Bar and N. Sochen. Strong solutions for pde-based tomography by unsupervised learning. SIAM J. Imag. Sci., 14(1):128–155, 2021.
  • [10] J. Barzilai and J. M. Borwein. Two-point step size gradient methods. IMA J. Numer. Anal., 8(1):141–148, 1988.
  • [11] J. Bikowski and J. L. Mueller. 2D EIT reconstructions using Calderón’s method. Inverse Probl. Imaging, 2(1):43–61, 2008.
  • [12] J. Bohr. A Bernstein–von-Mises theorem for the Calderón problem with piecewise constant conductivities. Inverse Problems, 39(1):015002, 18, 2023.
  • [13] T. Bonesky, K. Bredies, D. A. Lorenz, and P. Maass. A generalized conditional gradient method for nonlinear operator equations with sparsity constraints. Inverse Problems, 23(5):2041–2058, 2007.
  • [14] L. Borcea. Electrical impedance tomography. Inverse Problems, 18(6):R99–R136, 2002.
  • [15] L. Borcea, G. A. Gray, and Y. Zhang. Variationally constrained numerical solution of electrical impedance tomography. Inverse Problems, 19(5):1159–1184, 2003.
  • [16] A. Borsic, B. M. Graham, A. Adler, and W. R. B. Lionheart. In vivo impedance imaging with total variation regularization. IEEE Trans. Med. Imag., 29(1):44–54, 2010.
  • [17] K. Bredies, D. A. Lorenz, and P. Maass. A generalized conditional gradient method and its connection to an iterative shrinkage method. Comput. Optim. Appl., 42(2):173–193, 2009.
  • [18] M. Brühl and M. Hanke. Numerical implementation of two noniterative methods for locating inclusions by impedance tomography. Inverse Problems, 16(4):1029–1042, 2000.
  • [19] M. Brühl, M. Hanke, and M. S. Vogelius. A direct impedance tomography algorithm for locating small inhomogeneities. Numer. Math., 93(4):635–654, 2003.
  • [20] A.-P. Calderón. On an inverse boundary value problem. In Seminar on Numerical Analysis and its Applications to Continuum Physics (Rio de Janeiro, 1980), pages 65–73. Soc. Brasil. Mat., Rio de Janeiro, 1980.
  • [21] S. Cen, B. Jin, K. Shin, and Z. Zhou. Electrical impedance tomography with deep Calderon method. J. Comput. Phys., 493:112427, 2023.
  • [22] S. Chaabane, C. Elhechmi, and M. Jaoua. Error estimates in smoothing noisy data using cubic B-splines. C. R. Math. Acad. Sci. Paris, 346(1-2):107–112, 2008.
  • [23] Z. Chen, Z. Liu, L. Ai, S. Zhang, and Y. Yang. Mask-guided spatial–temporal graph neural network for multifrequency electrical impedance tomography. IEEE Trans. Instrum. Meas., 71:4505610, 2022.
  • [24] Z. Chen, J. Xiang, P.-O. Bagnaninchi, and Y. Yang. MMV-Net: A multiple measurement vector network for multifrequency electrical impedance tomography. IEEE Trans. Neural Networks Learn. System, page in press, 2022.
  • [25] Z. Chen and Y. Yang. Structure-aware dual-branch network for electrical impedance tomography in cell culture imaging. IEEE Trans. Instrum. Meas., 70:1–9, 2021.
  • [26] Z. Chen, Y. Yang, J. Jia, and P. Bagnaninchi. Deep learning based cell imaging with electrical impedance tomography. In 2020 IEEE International Instrumentation and Measurement Technology Conference (I2MTC), pages 1–6, 2020.
  • [27] M. Cheney and D. Isaacson. Distinguishability in impedance imaging. IEEE Trans. Biomed. Imag., 39(8):852–860, 1992.
  • [28] M. Cheney, D. Isaacson, J. C. Newell, S. Simske, and J. Goble. NOSER: An algorithm for solving the inverse conductivity problem. Int. J. Imag. Syst. Tech., 2:66–75, 1990.
  • [29] Y. T. Chow, K. Ito, and J. Zou. A direct sampling method for electrical impedance tomography. Inverse Problems, 30(9):095003, 2014.
  • [30] E. T. Chung, T. F. Chan, and X.-C. Tai. Electrical impedance tomography using level set representation and total variational regularization. J. Comput. Phys., 205(1):357–372, 2005.
  • [31] F. Colibazzi, D. Lazzaro, S. Morigi, and A. Samoré. Deep-plug-and-play proximal gauss-newton method with applications to nonlinear, ill-posed inverse problems. Inverse Probl. Imaging, pages 0–0, 2023.
  • [32] Y.-H. Dai, W. W. Hager, K. Schittkowski, and H. Zhang. The cyclic Barzilai-Borwein method for unconstrained optimization. IMA J. Numer. Anal., 26(3):604–627, 2006.
  • [33] I. Daubechies, M. Defrise, and C. De Mol. An iterative thresholding algorithm for linear inverse problems with a sparsity constraint. Comm. Pure Appl. Math., 57(11):1413–1457, 2004.
  • [34] M. M. Dunlop and A. M. Stuart. The Bayesian formulation of EIT: analysis and algorithms. Inverse Probl. Imaging, 10(4):1007–1036, 2016.
  • [35] H. W. Engl, M. Hanke, and A. Neubauer. Regularization of inverse problems. Kluwer Academic Publishers Group, Dordrecht, 1996.
  • [36] L. D. Faddeev. Increasing solutions of the Schrödinger equation. Sov.-Phys. Dokl., 10:1033–5, 1966.
  • [37] Y. Fan and L. Ying. Solving electrical impedance tomography with deep learning. J. Comput. Phys., 404:109119, 2020.
  • [38] M. Gehre and B. Jin. Expectation propagation for nonlinear inverse problems–with an application to electrical impedance tomography. J. Comput. Phys., 259:513–535, 2014.
  • [39] M. Gehre, B. Jin, and X. Lu. An analysis of finite element approximation in electrical impedance tomography. Inverse Problems, 30(4):045013, 2014.
  • [40] M. Gehre, T. Kluth, A. Lipponen, B. Jin, A. Seppänen, J. P. Kaipio, and P. Maass. Sparsity reconstruction in electrical impedance tomography: an experimental evaluation. J. Comput. Appl. Math., 236(8):2126–2136, 2012.
  • [41] R. Guo, S. Cao, and L. Chen. Transformer meets boundary value inverse problems. In The Twelfth International Conference on Learning Representations, 2023.
  • [42] R. Guo and J. Jiang. Construct deep neural networks based on direct sampling methods for solving electrical impedance tomography. SIAM J. Sci. Comput., 43(3):B678–B711, 2021.
  • [43] R. Guo, J. Jiang, and Y. Li. Learn an index operator by cnn for solving diffusive optical tomography: A deep direct sampling method. J. Sci. Comput., 95(1):31, 2023.
  • [44] S. J. Hamilton, A. Hänninen, A. Hauptmann, and V. Kolehmainen. Beltrami-net: domain-independent deep d-bar learning for absolute imaging with electrical impedance tomography (a-EIT). Physiol. Meas., 40(7):074002, 2019.
  • [45] S. J. Hamilton and A. Hauptmann. Deep D-bar: Real-time electrical impedance tomography imaging with deep neural networks. IEEE Trans. Med. Imag., 37(10):2367–2377, 2018.
  • [46] S. J. Hamilton, A. Hauptmann, and S. Siltanen. A data-driven edge-preserving D-bar method for electrical impedance tomography. Inverse Probl. Imaging, 8(4):1053–1072, 2014.
  • [47] B. Harrach and M. Ullrich. Monotonicity-based shape reconstruction in electrical impedance tomography. SIAM J. Math. Anal., 45(6):3382–3403, 2013.
  • [48] W. Herzberg, A. Hauptmann, and S. J. Hamilton. Domain independent post-processing with graph u-nets: Applications to electrical impedance tomographic imaging. arXiv preprint arXiv:2305.05020, 2023.
  • [49] W. Herzberg, D. B. Rowe, A. Hauptmann, and S. J. Hamilton. Graph convolutional networks for model-based learning in nonlinear inverse problems. IEEE Trans. Comput. Imag., 7:1341–1353, 2021.
  • [50] M. Hinze, B. Kaltenbacher, and T. N. T. Quyen. Identifying conductivity in electrical impedance tomography with total variation regularization. Numer. Math., 138(3):723–765, 2018.
  • [51] D. S. Holder, editor. Electrical Impedance Tomography: Methods, History and Applications. Institute of Physics Publishing, Bristol, 2004.
  • [52] S.-W. Huang, H.-M. Cheng, and S.-F. Lin. Improved imaging resolution of electrical impedance tomography using artificial neural networks for image reconstruction. In 2019 41st Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), pages 1551–1554. IEEE, 2019.
  • [53] N. Hyvönen. Complete electrode model of electrical impedance tomography: approximation properties and characterization of inclusions. SIAM J. Appl. Math., 64(3):902–931, 2004.
  • [54] M. Ikehata. Size estimation of inclusion. J. Inverse Ill-Posed Probl., 6(2):127–140, 1998.
  • [55] V. Isakov. Inverse Problems for Partial Differential Equations. Springer, New York, 2nd edition, 2006.
  • [56] K. Ito and B. Jin. Inverse problems: Tikhonov theory and algorithms. World Scientific Publishing Co. Pte. Ltd., Hackensack, NJ, 2015.
  • [57] B. Jin, T. Khan, P. Maass, and M. Pidcock. Function spaces and optimal currents in impedance tomography. J. Inverse Ill-Posed Probl., 19(1):25–48, 2011.
  • [58] B. Jin and P. Maass. An analysis of electrical impedance tomography with applications to tikhonov regularization. ESAIM: Control, Optim. Cal. Var., 18(4):1027–1048, 2012.
  • [59] B. Jin and P. Maass. Sparsity regularization for parameter identification problems. Inverse Problems, 28(12):123001, nov 2012.
  • [60] B. Jin, P. Maass, and O. Scherzer. Sparsity regularization in inverse problems. Inverse Problems, 33(6):060301, 2017.
  • [61] B. Jin and Y. Xu. Adaptive reconstruction for electrical impedance tomography with a piecewise constant conductivity. Inverse Problems, 36(1):014003, 2019.
  • [62] B. Jin, Y. Xu, and J. Zou. A convergent adaptive finite element method for electrical impedance tomography. IMA J. Numer. Anal., 37(3):1520–1550, 2017.
  • [63] K. H. Jin, M. T. McCann, E. Froustey, and M. Unser. Deep convolutional neural network for inverse problems in imaging. IEEE Trans. Image Proc., 26(9):4509–4522, 2017.
  • [64] J. P. Kaipio, V. Kolehmainen, E. Somersalo, and M. Vauhkonen. Statistical inversion and Monte Carlo sampling methods in electrical impedance tomography. Inverse Problems, 16(5):1487–1522, 2000.
  • [65] T. A. Khan and S. H. Ling. Review on electrical impedance tomography: Artificial intelligence methods and its applications. Algorithms, 12(5):88, 2019.
  • [66] D. P. Kingma and J. Ba. Adam: A method for stochastic optimization. In 3rd International Conference for Learning Representations, San Diego, 2015.
  • [67] I. Knowles. A variational algorithm for electrical impedance tomography. Inverse Problems, 14(6):1513–1525, 1998.
  • [68] K. Knudsen, M. Lassas, J. L. Mueller, and S. Siltanen. Regularized D-bar method for the inverse conductivity problem. Inverse Probl. Imaging, 3(4):599–624, 2009.
  • [69] R. V. Kohn and A. McKenney. Numerical implementation of a variational method for electrical impedance tomography. Inverse Problems, 6(3):389–414, 1990.
  • [70] A. Lechleiter. The MUSIC algorithm for impedance tomography of small inclusions from discrete data. Inverse Problems, 31(9):095004, 19, 2015.
  • [71] A. Lechleiter and A. Rieder. Newton regularizations for impedance tomography: a numerical study. Inverse Problems, 22(6):1967–1987, 2006.
  • [72] A. Lechleiter and A. Rieder. Newton regularizations for impedance tomography: convergence by local injectivity. Inverse Problems, 24(6):065009, 18, 2008.
  • [73] X. Li, R. Lu, Q. Wang, J. Wang, X. Duan, Y. Sun, X. Li, and Y. Zhou. One-dimensional convolutional neural network (1d-cnn) image reconstruction for electrical impedance tomography. Rev. Sci. Instrument., 91(12), 2020.
  • [74] X. Li, Y. Lu, J. Wang, X. Dang, Q. Wang, X. Duan, and Y. Sun. An image reconstruction framework based on deep neural network for electrical impedance tomography. In 2017 IEEE International Conference on Image Processing (ICIP), pages 3585–3589. IEEE, 2017.
  • [75] Z. Li, N. Kovachki, K. Azizzadenesheli, B. Liu, K. Bhattacharya, A. Stuart, and A. Anandkumar. Fourier neural operator for parametric partial differential equations. arXiv preprint arXiv:2010.08895, 2020.
  • [76] D. Liu, J. Wang, Q. Shan, D. Smyl, J. Deng, and J. Du. Deepeit: deep image prior enabled electrical impedance tomography. IEEE Trans. Pattern Anal. Mach. Intell., 45(8):9627–9638, 2023.
  • [77] L. Lu, P. Jin, G. Pang, Z. Zhang, and G. E. Karniadakis. Learning nonlinear operators via deeponet based on the universal approximation theorem of operators. Nature Mach. Int., 3(3):218–229, 2021.
  • [78] M. Lukaschewitsch, P. Maass, and M. Pidcock. Tikhonov regularization for electrical impedance tomography on unbounded domains. Inverse Problems, 19(3):585–610, 2003.
  • [79] S. Martin and C. T. Choi. A post-processing method for three-dimensional electrical impedance tomography. Sci. Rep., 7(1):7212, 2017.
  • [80] N. G. Meyers. An $L^{p}$-estimate for the gradient of solutions of second order elliptic divergence equations. Ann. Scuola Norm. Sup. Pisa Cl. Sci. (3), 17:189–206, 1963.
  • [81] V. Monga, Y. Li, and Y. C. Eldar. Algorithm unrolling: interpretable, efficient deep learning for signal and image processing. IEEE Signal Proc. Magaz., 38(2):18–44, 2021.
  • [82] J. L. Mueller and S. Siltanen. The D-bar method for electrical impedance tomography—demystified. Inverse Problems, 36(9):093001, 28, 2020.
  • [83] A. I. Nachman. Global uniqueness for a two-dimensional inverse boundary value problem. Ann. of Math. (2), 143(1):71–96, 1996.
  • [84] J. W. Neuberger. Sobolev gradients and differential equations, volume 1670 of Lecture Notes in Mathematics. Springer-Verlag, Berlin, 1997.
  • [85] D. Nganyu Tanyu, J. Ning, T. Freudenberg, N. Heilenkoetter, A. Rademacher, U. Iben, and P. Maass. Deep learning methods for partial differential equations and related parameter identification problems. Inverse Problems, 39(10):103001, aug 2023.
  • [86] J. Ning, F. Han, and J. Zou. A direct sampling-based deep learning approach for inverse medium scattering problems. arXiv preprint arXiv:2305.00250, 2023.
  • [87] R. G. Novikov. A multidimensional inverse spectral problem for the equation $-\Delta\psi+(v(x)-Eu(x))\psi=0$. Funktsional. Anal. i Prilozhen., 22(4):11–22, 96, 1988.
  • [88] A. Pokkunuru, P. Rooshenas, T. Strauss, A. Abhishek, and T. Khan. Improved training of physics-informed neural networks using energy-based priors: a study on electrical impedance tomography. In The Eleventh International Conference on Learning Representations, 2022.
  • [89] M. Raissi, P. Perdikaris, and G. E. Karniadakis. Physics-informed neural networks: A deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations. J. Comput. Phys., 378:686–707, 2019.
  • [90] S. Ren, R. Guan, G. Liang, and F. Dong. RCRC: A deep neural network for dynamic image reconstruction of electrical impedance tomography. IEEE Trans. Instrum. Meas., 70:1–11, 2021.
  • [91] L. Rondi. Discrete approximation and regularisation for the inverse conductivity problem. Rend. Istit. Mat. Univ. Trieste, 48:315–352, 2016.
  • [92] L. Rondi and F. Santosa. Enhanced electrical impedance tomography via the Mumford-Shah functional. ESAIM Control Optim. Calc. Var., 6:517–538, 2001.
  • [93] O. Ronneberger, P. Fischer, and T. Brox. U-net: Convolutional networks for biomedical image segmentation. In Medical Image Computing and Computer-Assisted Intervention–MICCAI 2015: 18th International Conference, Munich, Germany, October 5-9, 2015, Proceedings, Part III 18, pages 234–241. Springer, 2015.
  • [94] T. Schuster, B. Kaltenbacher, B. Hofmann, and K. S. Kazimierski. Regularization methods in Banach spaces. Walter de Gruyter GmbH & Co. KG, Berlin, 2012.
  • [95] J. Seo, K. Kim, A. Jargal, K. Lee, and B. Harrach. A learning-based method for solving ill-posed nonlinear inverse problems: A simulation study of lung eit. SIAM Journal on Imaging Sciences, 12(3):1275–1295, 2019.
  • [96] K. Shin and J. L. Mueller. A second order Calderón’s method with a correction term and a priori information. Inverse Problems, 36(12):124005, 22, 2020.
  • [97] S. Siltanen and T. Ide. Electrical impedance tomography, enclosure method and machine learning. In 2020 IEEE 30th International Workshop on Machine Learning for Signal Processing (MLSP), pages 1–6. IEEE, 2020.
  • [98] S. Siltanen, J. Mueller, and D. Isaacson. An implementation of the reconstruction algorithm of A. Nachman for the 2D inverse conductivity problem. Inverse Problems, 16(3):681–699, 2000.
  • [99] D. Smyl, T. N. Tallman, D. Liu, and A. Hauptmann. An efficient quasi-newton method for nonlinear inverse problems via learned singular values. IEEE Signal Processing Letters, 28:748–752, 2021.
  • [100] E. Somersalo, M. Cheney, and D. Isaacson. Existence and uniqueness for electrode models for electric current computed tomography. SIAM J. Appl. Math., 52(4):1023–1040, 1992.
  • [101] B. Sun, H. Zhong, Y. Zhao, L. Ma, and H. Wang. Calderón’s method-guided deep neural network for electrical impedance tomography. IEEE Trans. Instrum. Meas., page in press, 2023.
  • [102] C. Tan, S. Lv, F. Dong, and M. Takei. Image reconstruction based on convolutional neural network for electrical resistance tomography. IEEE Sensors J., 19(1):196–204, 2018.
  • [103] T. Tripura and S. Chakraborty. Wavelet neural operator: a neural operator for parametric partial differential equations. arXiv preprint arXiv:2205.02191, 2022.
  • [104] G. Uhlmann. Electrical impedance tomography and Calderón’s problem. Inverse Problems, 25(12):123011, 39, 2009.
  • [105] D. Ulyanov, A. Vedaldi, and V. Lempitsky. Deep image prior. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 9446–9454, 2018.
  • [106] Z. Wei, D. Liu, and X. Chen. Dominant-current deep learning scheme for electrical impedance tomography. IEEE Trans. Biomed. Eng., 66(9):2546–2555, 2019.
  • [107] A. Wexler, B. Fry, and M. R. Neuman. Impedance-computed tomography algorithm and system. Appl. Opt., 25:3985–92, 1985.
  • [108] S. J. Wright, R. D. Nowak, and M. A. T. Figueiredo. Sparse reconstruction by separable approximation. IEEE Trans. Signal Process., 57(7):2479–2493, 2009.
  • [109] Y. Wu, B. Chen, K. Liu, C. Zhu, H. Pan, J. Jia, H. Wu, and J. Yao. Shape reconstruction with multiphase conductivity for electrical impedance tomography using improved convolutional neural network method. IEEE Sensors J., 21(7):9277–9287, 2021.
  • [110] D. Yang, S. Li, Y. Zhao, B. Xu, and W. Tian. An eit image reconstruction method based on densenet with multi-scale convolution. Math. Biosci. Eng., 20(4):7633–7660, 2023.
  • [111] X. Zhang, Z. Wang, R. Fu, D. Wang, X. Chen, X. Guo, and H. Wang. V-shaped dense denoising convolutional neural network for electrical impedance tomography. IEEE Trans. Instrum. Meas., 71:1–14, 2022.
  • [112] Z. Zhou, G. S. dos Santos, T. Dowrick, J. Avery, Z. Sun, H. Xu, and D. S. Holder. Comparison of total variation algorithms for electrical impedance tomography. Physiol. Meas., 36(6):1193–1209, 2015.