
Computational Image Enhancement for Frequency Modulated Continuous Wave (FMCW) THz Image

Tak Ming Wong
  Center for Sensor Systems (ZESS), University of Siegen, 57076 Siegen, Germany
  Computer Graphics and Multimedia Systems Group, University of Siegen, 57076 Siegen, Germany

Matthias Kahl
  Center for Sensor Systems (ZESS), University of Siegen, 57076 Siegen, Germany
  Institute for High Frequency and Quantum Electronics (HQE), University of Siegen, 57068 Siegen, Germany

Peter Haring Bolívar
  Center for Sensor Systems (ZESS), University of Siegen, 57076 Siegen, Germany
  Institute for High Frequency and Quantum Electronics (HQE), University of Siegen, 57068 Siegen, Germany

Andreas Kolb
  Center for Sensor Systems (ZESS), University of Siegen, 57076 Siegen, Germany
  Computer Graphics and Multimedia Systems Group, University of Siegen, 57076 Siegen, Germany
(January 23, 2019)
Abstract

In this paper, a novel method to enhance Frequency Modulated Continuous Wave (FMCW) THz imaging resolution beyond its diffraction limit is proposed. Our method comprises two stages. Firstly, we reconstruct the signal in depth direction using a sinc envelope, yielding a significant improvement in depth estimation and signal parameter extraction. The resulting high precision depth estimate is used to deduce an accurate reflection intensity THz image. In the second stage of our method, this image is fed into a 2D blind deconvolution procedure adapted to enhance the lateral THz image resolution beyond the diffraction limit. Experimental data acquired with an FMCW system operating at 577 GHz with a bandwidth of 126 GHz shows that the proposed method enhances the lateral resolution by a factor of 2.29 to 346.2 μm with respect to the diffraction limit. The depth accuracy is 91 μm. Interestingly, the lateral resolution enhancement achieved with this blind deconvolution concept leads to better results in comparison to conventional Gaussian deconvolution. Experimental data on a PCB resolution target is presented in order to quantify the resolution enhancement and to compare the performance with established image enhancement approaches. The presented technique allows exposure of the interwoven fibre reinforced embedded structures of the PCB test sample.

Keywords Terahertz, Frequency Modulated Continuous Wave (FMCW), resolution enhancement, deconvolution, parameter extraction

1 Introduction

Since its original inception in the early nineties [1], THz imaging has demonstrated a very large potential for contact-free analysis, nondestructive testing and stand-off detection in a wide variety of application fields, such as semiconductor industry, biology, medicine, material analysis, quality control, and security [2, 3, 4]. In many of these application fields, THz imaging is competing with established imaging methodologies, such as optical inspection or X-ray imaging. Compared with imaging in the optical or X-ray parts of the electromagnetic spectrum, THz imaging is significantly limited in its spatial resolution due to the substantially longer wavelength of the associated frequencies.

A wide range of technological approaches to realize THz imaging systems have been demonstrated, including femtosecond laser based scanning systems [1, 5], synthetic aperture systems [6, 7], and hybrid systems [8]. Several of these approaches, using pulses or frequency modulated THz signals, can sense temporal or phase shifts relative to the object's surface, making 3D THz imaging possible.

Despite the fact that in most of these approaches THz imaging is performed close to the diffraction limit, the comparatively low spatial resolution associated with THz radiation wavelengths significantly hampers the application range. There is great interest in increasing the spatial resolution beyond the diffraction limit, in order to make this technique competitive with established methods such as X-ray imaging [9].

THz imaging below the diffraction limit is an emerging area [3], which can roughly be classified into two categories: system enhancements and computational approaches. System enhancements include, for example, interferometric sensing [10] to increase the depth sensitivity in THz time-of-flight imaging, or near-field sensing approaches [11] which demonstrate a nanometer scale lateral resolution by circumventing the diffraction limit. Computational image enhancement techniques aim at improving the resolution by utilizing numerical procedures and additional signal or system information, e.g. from overlapping oversampled THz imaging signals, without introducing additional equipment effort and cost.

Depending on the THz image acquisition mode, there are several alternative approaches for computational image enhancement. THz imaging superresolution (also referred to as high-resolution or image restoration) is often associated with spatial resolution enhancement in the xy-direction [12, 13, 14, 15, 16]. In contrast, depth resolution enhancement is associated with improvement in azimuth direction (z-direction) [17, 18, 19].

In this paper, we propose a novel method to enhance THz image resolution beyond the diffraction limit, attaining a lateral (xy) resolution increase and a depth (z) accuracy increase. The concept is demonstrated for a Frequency Modulated Continuous Wave (FMCW) THz scanning system operating at 514-640 GHz, but can certainly be adapted to other THz imaging techniques. Our approach comprises the following technical contributions:

  • Parameter extraction by a complex signal fitting model in z-direction for each pixel, which allows for the acquisition of non-planar targets, incorporating

    • an accurate estimation of the per-pixel distance to the object surface, and

    • a proper reconstruction of the reflection intensity as a measure for the object’s material-properties.

  • Based on the accurately reconstructed reflection intensity, we are able to apply more complex, state-of-the-art 2D blind deconvolution techniques in order to improve the spatial xy-resolution beyond what is achievable with traditional (e.g. Gaussian kernel) deconvolution procedures.

Sections 3 and 4 give a brief overview of the THz 3D imaging system and the proposed method, respectively. The curve fitting procedure is detailed in Sec. 5 and the deconvolution stage in Sec. 6. Sec. 7 presents the evaluation of the proposed method.

2 Prior Work

The majority of prior research focuses on the lateral resolution of 2D THz images, where the Lucy-Richardson deconvolution algorithm [20, 21] is one of the most frequently used methods. Knobloch et al. [22] were the first to apply Lucy-Richardson deconvolution to THz images. Xu et al. [12] proposed a THz time-domain spectroscopy (THz-TDS) high-resolution reconstruction model incorporating a 2D wavelet decomposition and a Lucy-Richardson deconvolution, reconstructing a high-resolution THz image from four low-resolution images as well as from a single degraded 2D low-resolution image. Li et al. [13] proposed to use the Lucy-Richardson deconvolution algorithm for a coherent THz 2D imaging system. Ding et al. [14] used the Lucy-Richardson deconvolution for a THz reflective 2D imaging system.

In addition to a 2D deconvolution algorithm, Hou et al. [15] proposed a method to enhance THz image quality by applying a finite impulse response (FIR) filter in the time domain. Ahi and Anwar [16] proposed a deconvolution method based on their THz image generation model for a THz far-field 2D imaging system. Xie et al. [23] proposed to use a Markov Random Field (MRF) model for THz superresolution.

For THz blur kernel estimation, Ding et al. [7] proposed to use a Range Migration Algorithm (RMA). Gu et al. [24] proposed a physical model of the THz point spread function (PSF) as a Gaussian kernel, based on the optical parameters of a THz-TDS system.

To enhance depth accuracy, Walker et al. [17] proposed a time domain deconvolution method with an increased time domain zero-padding factor for THz reflection imaging. Chen et al. [18] proposed a hybrid Frequency-Wavelet Domain Deconvolution (FWDD) method to improve the calculation of impulse response functions. Takayanagi et al. [19] proposed a deconvolution method based on Wiener filtering in the time domain. Dong et al. [25] proposed a sparse deconvolution of the impulse response function for multi-layered structures.

In the context of THz 3D image reconstruction based on the Phase Shift Migration (PSM) algorithm, Sun et al. [26] proposed Enhanced Phase Shift Migration (EPSM) to improve computational efficiency. Liu et al. [27] proposed a 3D Wavenumber Scaling Algorithm (3D WSA) to reconstruct the entire 3D holographic image.

To the best of our knowledge, the method in this paper is the first attempt to combine depth accuracy with lateral resolution enhancement in order to achieve a jointly improved spatial resolution and accuracy in both xy- and z-direction.

3 Overview of Terahertz 3D imaging system

Figure 1: THz 3D imaging geometry (top) and photograph of the THz 3D imaging unit (bottom)
Figure 2: Schematic of the THz 3D imaging system setup and the host PC

Our electronic THz imaging system is based on hollow-waveguide multipliers and mixers, operating in a frequency modulated continuous wave (FMCW) mode for measuring depth information. The components are operating around a frequency of 577 GHz with a bandwidth of 126 GHz. More details on the experimental approach are described in [7].

Fig. 1 shows the imaging geometry. Both transmitter (Tx) and receiver (Rx) are mounted on the same platform. The imaging unit, consisting of Tx, Rx and optical components, is moved along the x and y direction using stepper motors and linear stages. This imaging unit takes a depth profile of the object at each lateral position in order to acquire a full 3D image. The data is acquired with a lateral step size of 262.5 μm in xy-direction. During measurement, the motor controller and the data acquisition are synchronized to enable on-the-fly measurements. An adequate integration time and velocity are chosen in order to provide enough time for the acquisition of 1400 samples per depth profile and 36 averages per sample. The total acquisition time for such an averaged depth profile is 55 ms.

Transmitter (Tx) and receiver (Rx) are mounted in a monostatic geometry. A beam splitter and two hyperbolic lenses focus the beam radiated from the Tx and reflected from the sample into the Rx. By measuring the minimum dimension that yields more than 3 dB intensity difference, the resolution of the setup is determined as 793.7 μm using a metallic USAF 1951 Resolving Power Test Target scaled to the THz frequency range, which is close to the theoretical expectation of 622 μm (numerical aperture NA = 0.508 at 578 GHz). Due to the monostatic scanning approach, no optical magnification occurs. The system's depth resolution Δd is defined by

\Delta d = \frac{c}{2B} \qquad (1)

where c is the speed of light in air and B is the system bandwidth. For our system, Δd = 1210 μm.
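As a quick sanity check (a sketch for the reader, not part of the measurement pipeline), Eq. (1) can be evaluated directly from the nominal 126 GHz bandwidth; the exact result depends on the effective bandwidth value used.

```python
# Numeric check of Eq. (1): FMCW depth resolution from the system bandwidth.
c = 299_792_458.0   # speed of light (vacuum value, approximating air), m/s
B = 126e9           # system bandwidth, Hz

delta_d = c / (2 * B)   # Eq. (1), in metres
print(f"depth resolution: {delta_d * 1e6:.0f} um")
```

With these nominal values, the expression evaluates to roughly 1.19 mm, on the order of the Δd = 1210 μm quoted above.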

For the FMCW operation, a voltage controlled oscillator (VCO) is tuned from 14.28 to 17.78 GHz (see Fig. 2). The signal at the output of the VCO is distributed to the Tx and the Rx using a power splitter. For transmission, the signal is then upconverted to 514-640 GHz with a chain of multipliers. After downconversion to the intermediate frequency (IF) range using a sub-harmonic mixer, which is fed with the 6th harmonic of the VCO signal, the signal is digitized at a sampling rate of 10 mega-samples per second and transferred to the host PC. An additional delay between the Tx and Rx path creates a frequency offset in the intermediate frequency signal for proper data acquisition.

4 Proposed method

4.1 Overview

Figure 3: Metal on PCB board test target fabricated according to USAF 1951 MIL-STD-150A standard, where the group and row numbers indicate the lines per millimeter according to [28]. Group 0 Element 4 for Sec. 7.3.1 is indicated.
[Block diagram: Preprocessing (zero-padding, FFT) → Parameter extraction (find max magnitude position z_max; magnitude fitting yielding A_m, μ_m, σ_m; phase initialization yielding φ_m; complex fitting yielding A, μ, σ, φ; reconstruction yielding d_v, I_v) → Deconvolution, producing the output image I_d.]
Figure 4: Block diagram of proposed method (see intensity images in Fig. 9 and Fig. 13)

In the previous THz 3D imaging system [7], the target was assumed to be ideally flat and perfectly orthogonally aligned to the THz sensor. However, perfect planarity and orthogonality require a high-precision manufacturing procedure and calibration of the acquisition setup. To study more realistic THz imaging scenarios, we allow for non-planar targets (see Figs. 3 and 5) which are not perfectly orthogonally aligned to the sensor, i.e., the distance between the sensor and a lateral pixel in xy-direction is an unknown variable.

Fig. 4 depicts our method, which comprises three major components: pre-processing, parameter extraction and deconvolution. In the pre-processing part, the measured complex signal is interpolated by zero-padding to obtain more sampling points in z-direction. In the parameter extraction part, a complex model is fitted to the in-phase and quadrature components of the signal in z-direction for each lateral position. From this fitting, we deduce a corrected complex reflectance signal and depth information. In the deconvolution part, we process the reconstructed 2D image with a deconvolution algorithm to form a high resolution image in the xy-domain.

4.2 Preprocessing

The initial input to our computational procedure is a complex THz signal acquired per pixel with frequency ramping [29]. In this paper, u(x,y) denotes the measured complex signal at lateral position (x,y) with discrete frequency parameter k and length N_z.

In order to achieve sub-wavelength geometric correction, more sampling points on the z-axis are required for robust curve fitting. Based on the current acquisition system (see Sec. 3), an intuitive method is to interpolate the signal in the spatial domain, but the time domain signal provides another option. Instead of spatial interpolation, we extend the discrete signal by a factor of NN using zero-padding of the complex electric field signal u(x,y)[k]u(x,y)[k] in the frequency domain:

u_N(x,y)[k] = \begin{cases} u(x,y)[k], & \text{if } k < N_z \\ 0, & \text{otherwise,} \end{cases} \qquad (2)

where N is the zero-padding factor, and the length of u_N is D = N·N_z. In this paper, we use N = 9.

After zero-padding, the signal is transformed into the spatial domain by applying a deramp-FFT [29].

\hat{u}(x,y) = \mathcal{F}\{u_N(x,y)\}. \qquad (3)

The resulting 3D image û can be expressed as a 3D matrix in the spatial xyz-domain, representing per pixel (x,y) the complex reflectivity of THz energy in z-direction by the complex samples û(x,y)[z_0], …, û(x,y)[z_{D-1}].
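The zero-padding of Eq. (2) and the transform of Eq. (3) can be sketched in a few lines of numpy; the function name `preprocess` and the toy signal are ours, and the actual deramp-FFT of [29] may involve additional steps.

```python
import numpy as np

def preprocess(u, N=9):
    """Zero-pad the frequency-domain signal u[k] by a factor N (Eq. (2))
    and transform it to the spatial domain (Eq. (3))."""
    Nz = u.shape[-1]
    u_N = np.zeros(u.shape[:-1] + (N * Nz,), dtype=complex)
    u_N[..., :Nz] = u          # measured samples kept, remainder zero
    return np.fft.fft(u_N, axis=-1)

# Toy per-pixel signal of length Nz = 8, interpolated to D = 72 samples.
u = np.exp(2j * np.pi * 0.2 * np.arange(8))
u_hat = preprocess(u)
print(u_hat.shape)   # (72,)
```

Zero-padding in the frequency domain corresponds to band-limited interpolation in the spatial domain, which is what provides the additional z-samples for the curve fitting.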

5 Parameter extraction

In this part, we apply per-pixel parameter extraction in z-direction in order to represent the measured complex signal by a complex reflection model. As each pixel is treated independently, we simplify notation by dropping the pixel location, e.g., writing û[z_i] for û(x,y)[z_i].

Our FMCW THz 3D imaging system is calibrated by amplitude normalization with respect to an ideal metallic reflector. Thus, we obtain an ideal rectangular frequency amplitude response, which, after being Fourier transformed, results in an ideal sinc function v(z) as the continuous spatial signal amplitude. In this model, we assume a single layer reflection; the extension to multi-layer reflection is part of future work. The complex signal model v(z) is therefore a modulated sinc function:

v(z) = A \cdot \operatorname{sinc}(\sigma(z-\mu)) \cdot e^{-j(\omega z-\phi)},
\quad \text{where } \operatorname{sinc}(t) = \begin{cases} \frac{\sin(\pi t)}{\pi t} & t \neq 0 \\ 1 & t = 0 \end{cases} \qquad (4)

In Eq. (4), A is the electric field amplitude, μ and σ are the mean (i.e. the depth) and the width of the sinc function, respectively, ω is the angular frequency of the sinusoidal carrier, and φ is the depth-related phase shift. We formulate the parameter extraction as a complex curve fitting process minimizing the energy function ε_c:

\varepsilon_c = \arg\min_{A,\mu,\sigma,\phi} \sum_{z\in Z_f} \left[ \left(\hat{u}_{re}[z_i] - v_{re}(z\,|\,A,\mu,\sigma,\phi)\right)^2 + \left(\hat{u}_{im}[z_i] - v_{im}(z\,|\,A,\mu,\sigma,\phi)\right)^2 \right] \qquad (5)

where the subscripts re and im denote the real and the imaginary part of a complex number, respectively, and Z_f is the fitting window (see Sec. 5.1).
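As an illustration (the function names are ours, not from the paper), the model of Eq. (4) and the residual underlying Eq. (5) translate directly into code; `np.sinc` already implements the normalized sinc of Eq. (4), including the t = 0 case:

```python
import numpy as np

def sinc_model(z, A, mu, sigma, omega, phi):
    """Complex reflection model v(z) of Eq. (4)."""
    # np.sinc(t) = sin(pi t) / (pi t), with sinc(0) = 1 handled internally.
    return A * np.sinc(sigma * (z - mu)) * np.exp(-1j * (omega * z - phi))

def residuals(params, z, u_hat, omega):
    """Stacked real/imaginary residuals whose squared sum is the energy of Eq. (5)."""
    A, mu, sigma, phi = params
    r = u_hat - sinc_model(z, A, mu, sigma, omega, phi)
    return np.concatenate([r.real, r.imag])
```

A non-linear least-squares solver applied to `residuals` then minimizes ε_c.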

Because of the highly non-linear optimization involved in the curve fitting process, a direct application of Eq. (5) to a non-linear solver potentially results in local minima and does not lead to robust results. Therefore, we apply the following optimization steps which achieve a robust complex curve fitting:

  1. Estimate the signal's maximum magnitude z-position z_max in order to localize the curve fitting window (see Sec. 5.1).

  2. Apply a curve fitting to the magnitude signal, leading to initial values A_m, μ_m, σ_m (see Sec. 5.2).

  3. Estimate the initial phase value φ_m by matching the phase to the angle ∠û of the complex signal (see Sec. 5.3).

  4. Based on the initial values A_m, μ_m, σ_m, φ_m, perform the optimization by minimizing the energy ε_c (see Eq. (5)) using the Trust Region Algorithm [30].

  5. Reconstruct an intensity image I_v and a depth image d_v from the curve fitting result (see Sec. 5.4).
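The steps above can be sketched for a single pixel with scipy's trust-region least-squares solver (`method='trf'`); this is a simplified illustration under our own naming, not the authors' implementation:

```python
import numpy as np
from scipy.optimize import least_squares

def fit_pixel(z, u_hat, omega, tau_f=45):
    """Per-pixel parameter extraction, steps 1-4; returns (A, mu, sigma, phi)."""
    # Step 1: fitting window around the maximum magnitude position (Eqs. (6)-(7)).
    i_max = int(np.argmax(np.abs(u_hat)))
    sl = slice(max(i_max - tau_f, 0), min(i_max + tau_f + 1, len(z)))
    zf, uf = z[sl], u_hat[sl]

    # Step 2: magnitude-only fit (Eq. (9)) for initial A_m, mu_m, sigma_m.
    def res_mag(p):
        A, mu, sigma = p
        return np.abs(uf) - A * np.abs(np.sinc(sigma * (zf - mu)))
    p_mag = least_squares(res_mag, x0=[np.abs(uf).max(), z[i_max], 1.0],
                          method='trf').x

    # Step 3: closed-form initial phase (Eq. (12)), via the quadrant-aware arctan2.
    ang = np.angle(uf)
    phi0 = np.arctan2(np.sin(omega * zf + ang).sum(),
                      np.cos(omega * zf + ang).sum())

    # Step 4: full complex fit minimizing the energy of Eq. (5).
    def res_cplx(p):
        A, mu, sigma, phi = p
        v = A * np.sinc(sigma * (zf - mu)) * np.exp(-1j * (omega * zf - phi))
        return np.concatenate([(uf - v).real, (uf - v).imag])
    return least_squares(res_cplx, x0=[*p_mag, phi0], method='trf').x
```

On noiseless synthetic data generated from Eq. (4), this recovers the true parameters; the staged initialization mirrors why a direct fit of Eq. (5) alone would risk local minima.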

Our curve fitting approach significantly reduces the per-pixel intensity inhomogeneity (see Sec. 7.2.2). The subsequent lateral deconvolution algorithm discussed in Sec. 6 involves the numerical solution of an ill-posed inverse problem, finding the blur kernel and enhancing the image's sharpness at the same time, which is very sensitive to noise. Therefore, correcting the intensity yields two advantages: it stabilizes the numerical deconvolution process, and it prevents misinterpretation of intensity variations as structural or material transitions. We provide a comparison in Fig. 13 and discuss the lateral resolution enhancement with and without per-pixel parameter extraction in Sec. 7.3.1.

5.1 Curve Fitting Window

As we focus on the primary reflection signal, assuming that the geometric reflection energy concentrates on the first air-material interface, we locate the z-position that exhibits the maximum magnitude within a fitting window

Z_f = [z_{\text{max}} - \tau_f,\; z_{\text{max}} + \tau_f] \qquad (6)

with center

z_{\text{max}} = \arg\max_{z_i} \left|\hat{u}[z_i]\right|, \qquad (7)

i.e., z_max is the maximum magnitude z-position in the complex spatial domain data, and τ_f is the half-width of the fitting window. The choice of τ_f is discussed in Sec. 7.1.

5.2 Magnitude Curve Fitting

Since the complex model v(z) in Eq. (4) is non-linear and the optimization of ε_c in Eq. (5) is non-convex, the estimation of the initial parameters is critical in order to avoid local minima. A reliable initial estimate of the complex curve fitting parameters A_m, μ_m, σ_m is deduced from a magnitude curve fitting.

The magnitude signal model v_m(z) is derived from Eq. (4) and expressed as

v_m(z) = A_m \cdot \left|\operatorname{sinc}\left(\sigma_m (z - \mu_m)\right)\right|, \qquad (8)

where A_m is the electric field amplitude based on the signal magnitude, μ_m is the center of the sinc function, and σ_m is its width. The magnitude curve fitting minimizes the energy function ε_m using the Trust Region Algorithm [30]:

\varepsilon_m = \arg\min_{A_m,\mu_m,\sigma_m} \sum_{z\in Z_f} \left(|\hat{u}[z_i]| - v_m(z\,|\,A_m,\mu_m,\sigma_m)\right)^2. \qquad (9)

After the magnitude curve fitting, A_m, μ_m, σ_m serve as initial values for A, μ, σ in the complex signal model in Eq. (4). However, an estimate for the phase angle φ_m is still required.

5.3 Estimating the Initial Phase Value φ_m

We assume that, within the fitting window Z_f, the phase angle ωz − φ of our model (see Eq. (4)) matches the phase angle ∠û of the spatial domain data û[z_i]. The corresponding optimization energy functional measures the deviation between these phase angles as follows:

\varepsilon_\phi = \arg\min_{\phi_m} \frac{1}{2} \sum_{z\in Z_f} \left[ \left(\cos(\phi_m - \omega z) - \cos(\angle\hat{u})\right)^2 + \left(\sin(\phi_m - \omega z) - \sin(\angle\hat{u})\right)^2 \right] \qquad (10)

Setting the gradient of εϕ\varepsilon_{\phi} to zero and applying trigonometric identities, we obtain

\sin\phi_m \sum_{z\in Z_f} \cos(\omega z + \angle\hat{u}) = \cos\phi_m \sum_{z\in Z_f} \sin(\omega z + \angle\hat{u}) \qquad (11)

In Eq. (11), the initial phase angle φ_m does not depend on the summation variable z and can therefore be moved outside the sums. The minimum of Eq. (10) is thus found by solving Eq. (11) for φ_m, yielding

\phi_m = \tan^{-1} \frac{\sum_{z\in Z_f} \sin(\omega z + \angle\hat{u})}{\sum_{z\in Z_f} \cos(\omega z + \angle\hat{u})} \qquad (12)

After this phase initialization, A_m, μ_m, σ_m, and φ_m serve as initial values for A, μ, σ, and φ in the model in Eq. (4), respectively, and the model is fitted according to the energy function ε_c in Eq. (5) using the Trust Region Algorithm [30].
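When implementing Eq. (12), the two-argument arctangent should be used, since it resolves the quadrant from the signs of numerator and denominator. A minimal numpy check on synthetic data (the carrier and phase values are ours):

```python
import numpy as np

omega, phi_true = 2.0, 2.0            # synthetic carrier frequency and phase
z = np.linspace(0.0, 5.0, 100)
u_angle = np.angle(np.exp(-1j * (omega * z - phi_true)))   # wrapped data phase

# Eq. (12), evaluated with the quadrant-aware arctan2:
phi_m = np.arctan2(np.sin(omega * z + u_angle).sum(),
                   np.cos(omega * z + u_angle).sum())
print(phi_m)   # recovers 2.0; a plain tan^-1 would fold this into (-pi/2, pi/2)
```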

After the complex curve fitting, the four parameters of our model in Eq. (4) are extracted: amplitude A, mean μ, width σ, and phase φ. Because of scattering and multi-layer reflections, fitting an ideal sinc function leaves a residual error; therefore, σ is extracted as a varying parameter to indicate this error. The depth parameter μ is evaluated in Sec. 7.2. The amplitude parameter A is further processed in a 2D approach. The processing of the other parameters (μ, σ, φ) will be investigated in future research.

5.4 Intensity Image Reconstruction

Next, we extract the per-pixel intensities using our model. We therefore first define the reference intensity image I_u based on the input data û as the intensity of the z-slice with the maximum average magnitude:

I_u(x,y) = \hat{u}[z_{mean}] \cdot \hat{u}[z_{mean}]^*, \quad \text{where } z_{mean} = \arg\max_{z_i} \frac{\sum_{x,y} \left|\hat{u}(x,y)[z_i]\right|}{N_x N_y} \qquad (13)

where û* is the complex conjugate of û, and N_x, N_y are the sizes of the matrix in x and y, respectively. The reconstructed intensity is deduced from the curve fitted data model and is defined as the intensity of the model's signal (Eq. (4)) at the center position μ:

I_v(x,y) = v(x,y,\mu) \cdot v(x,y,\mu)^* = A^2(x,y) \cdot \operatorname{sinc}^2(0) = A^2(x,y) \qquad (14)
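Both intensity definitions reduce to a few array operations; a numpy sketch (function names are ours), where `u_hat` is the 3D array of Sec. 4.2 and `A` the fitted amplitude map:

```python
import numpy as np

def reference_intensity(u_hat):
    """I_u of Eq. (13): intensity of the z-slice with maximum mean magnitude."""
    z_mean = int(np.argmax(np.abs(u_hat).mean(axis=(0, 1))))
    s = u_hat[:, :, z_mean]
    return (s * s.conj()).real

def reconstructed_intensity(A):
    """I_v of Eq. (14): the fitted model at z = mu, i.e. the squared amplitude."""
    return A ** 2
```

Note that I_u picks one global z-slice for all pixels, whereas I_v evaluates each pixel at its own fitted depth μ, which is what removes the per-pixel intensity inhomogeneity.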

6 Deconvolution

After the curve fitting, the reconstructed intensity I_v is a 2D image with real and positive values. We can now apply a state-of-the-art 2D deconvolution algorithm to enhance the xy-domain resolution. In contrast to prior work, we use a total variation (TV) blind deconvolution algorithm to improve the spatial resolution [31, 32].

In the deconvolution image model, our reconstructed intensity image I_v can be expressed as a blurred observation of a sharp image I_d:

I_v = I_d \ast h + \eta, \qquad (15)

where h is the spatially invariant point spread function (PSF), also known as the blur kernel, η is the noise, and ∗ denotes convolution. Blind deconvolution methods allow estimating the blur kernel directly from the data, which is, however, an ill-posed inverse problem that requires prior knowledge in order to deduce a robust result [33]. In this paper, we exploit the sparse nature of intensity gradients [34] and choose a state-of-the-art TV blind deconvolution algorithm that minimizes

\left(I_d, h\right) = \arg\min_{I_d,h} \|I_d \ast h - I_v\|_1 + \lambda \|\nabla I_d\|_1. \qquad (16)

Here, \|I_d \ast h - I_v\|_1 is commonly referred to as the data term, λ is a regularization parameter, and \|\nabla I_d\|_1 is the TV regularization (or prior) that enforces the gradient of the resulting deblurred image I_d to be sparse. As a by-product, the blind deconvolution yields the estimated PSF h. We obtain our final results using the implementation from Xu et al. [31, 32].
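Since the evaluation later compares against Lucy-Richardson deconvolution, a compact reference implementation is instructive. The numpy/scipy sketch below implements the classical multiplicative Lucy-Richardson update for a known kernel h [20, 21]; it does not reproduce the TV blind deconvolution of Xu et al. [31, 32], which additionally estimates h itself.

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(I_v, h, n_iter=30, eps=1e-12):
    """Lucy-Richardson deconvolution for the image model of Eq. (15),
    assuming the blur kernel h is known (non-blind baseline)."""
    I_d = np.full_like(I_v, I_v.mean())   # flat, positive initial estimate
    h_flip = h[::-1, ::-1]                # adjoint (mirrored) kernel
    for _ in range(n_iter):
        blur = fftconvolve(I_d, h, mode='same')
        ratio = I_v / np.maximum(blur, eps)   # eps guards against division by zero
        I_d *= fftconvolve(ratio, h_flip, mode='same')
    return I_d
```

The multiplicative update preserves non-negativity, which suits the intensity image I_v; the blind TV formulation of Eq. (16) replaces the fixed h by a jointly estimated kernel.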

7 Evaluation

Figure 5: Metallic step object with a reference zero

In this section, we evaluate our image enhancement approach using the following two datasets:

  1. MetalPCB: A nearly planar "USAF" target etched on a circuit board (Fig. 3). The dataset has been acquired using the setup described in Sec. 3 and has the resolution N_x = 446, N_y = 446, N_z = 1400. The lateral per-pixel distance (i.e. the sensor's mechanical step size) is 262.5 μm. The target is etched in the standard size scale of the USAF target MIL-STD-150A.

  2. StepChart: A metallic step chart with steps varying from 4000 μm to 50 μm, and a reference plane to locate the reference zero position (Fig. 5). The dataset has also been acquired using the setup in Sec. 3, with the resolution N_x = 575, N_y = 113, N_z = 1400. The lateral per-pixel distance is 262.5 μm.

It should be noted that our system has an ideal diffraction-limited lateral resolution of 622 μm, an experimentally measured lateral resolution of 793.7 μm, and a depth resolution of Δd = 1210 μm (see Sec. 3).

We apply and compare the following images, reconstruction and image enhancement methods. The images are named as ⟨image⟩_⟨method⟩-⟨kernel⟩, where ⟨image⟩ is the intensity image the deconvolution is applied to, ⟨method⟩ is the applied enhancement method, and ⟨kernel⟩ is the deblur kernel adopted in the deconvolution.

  1. Refer: The reference intensity image I_u; no reconstruction using curve fitting and no image enhancement applied.

  2. Reconst: The reconstructed intensity image I_v using curve fitting; no image enhancement applied.

  3. Refer_Xu: Image enhancement using Xu et al. [31, 32] applied to the reference intensity Refer.

  4. Reconst_Xu: Image enhancement using Xu et al. [31, 32] applied to the reconstructed intensity Reconst. This is our proposed method.

  5. Refer_LR-G: Lucy-Richardson [20, 21] with Gaussian kernel applied to the reference intensity Refer.

  6. Reconst_LR-G: Lucy-Richardson [20, 21] with Gaussian kernel applied to the reconstructed intensity Reconst.

  7. Refer_LR-Xu: Lucy-Richardson [20, 21] with the sparse kernel extracted by Xu et al. [31, 32] applied to the reference intensity Refer.

  8. Reconst_LR-Xu: Lucy-Richardson [20, 21] with the sparse kernel extracted by Xu et al. [31, 32] applied to the reconstructed intensity Reconst.

In the following, we first deduce the optimal window size for the quality control of the fitting using the MetalPCB dataset (Sec. 7.1). In Sec. 7.2, we discuss the depth accuracy using the StepChart dataset. In Sec. 7.3, we discuss the lateral resolution on the MetalPCB dataset. All intensity images are normalized to a perfect metal reflection and displayed using MATLAB's perceptually uniform colormap parula [35].

7.1 Window Size Optimization

Figure 6: Average reflected intensity of û in the PCB region.
Figure 7: Mean RMSE and maximum RMSE versus fitting window half-width τ_f.

In Fig. 6, the average intensity of û along the z-axis in the PCB region of the MetalPCB dataset is shown. Compared to the symmetric model in Eq. (4), the z-axis signal has a lower main lobe to side lobe ratio in the PCB region, which might be due to the superposition of signal reflections from the front and back PCB surfaces (see Fig. 3). This indicates that a large fitting window size τ_f in Eq. (6) would corrupt any quantitative evaluation and should be avoided. On the other hand, a very small fitting window is also not feasible, as we need a sufficient number of sampling points to get a robust fitting result and to avoid over-fitting.

To obtain a reliable numeric measure, we evaluate the fitting error for varying fitting window half-widths τ_f using the Root-Mean-Square Error (RMSE) between the fitted model v (see Eq. (4)) and the spatial data û:

\mbox{RMSE}(x,y) = \sqrt{\frac{\sum_{z\in Z_f} |\hat{u}(x,y)[z] - v(x,y,z)|^2}{2\tau_f + 1}} \qquad (17)

As a measure for the full THz image, we calculate the mean and the maximum RMSE over all pixels

\mbox{Mean RMSE} = \frac{\sum_{x,y} \mbox{RMSE}(x,y)}{N_x N_y}, \qquad \mbox{Max RMSE} = \max_{x,y} \mbox{RMSE}(x,y) \qquad (18)
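In code, Eqs. (17)-(18) reduce to a few array operations; a numpy sketch (names are ours), where `u_win` and `v_win` hold the data and the fitted model inside the fitting window for every pixel, with shape (N_x, N_y, 2·τ_f + 1):

```python
import numpy as np

def rmse_map(u_win, v_win, tau_f):
    """Per-pixel RMSE of Eq. (17) over the fitting window (2*tau_f+1 samples)."""
    err = np.abs(u_win - v_win) ** 2
    return np.sqrt(err.sum(axis=-1) / (2 * tau_f + 1))

def rmse_summary(rmse):
    """Mean and maximum RMSE over all pixels (Eq. (18))."""
    return rmse.mean(), rmse.max()
```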

In Fig. 7, the mean and the maximum RMSE of the curve fitting for different fitting window half-widths τ_f are shown. Both the mean and the maximum RMSE increase for τ_f ≤ 13, which is expected due to over-fitting. When the fitting window is enlarged further, the mean RMSE decreases steadily until τ_f = 45. The maximum RMSE, however, shows no clear tendency and varies strongly beyond τ_f = 28. Since the mean and maximum RMSE are both comparatively low at a window half-width of 45, we choose τ_f = 45 as our optimal window size, which is used throughout the rest of the evaluation.

7.2 Depth Accuracy and Intensity Reconstruction

In this section, we analyze the depth accuracy enhancement in z-direction obtained by the proposed curve fitting method. In Sec. 7.2.1, the depth accuracy of the StepChart dataset using the maximum magnitude method and the proposed method is shown. In Sec. 7.2.2, we analyze the variance of a homogeneous metal cross section of the MetalPCB dataset.

7.2.1 Depth Accuracy

Figure 8: Cross section depth of StepChart dataset by the proposed method and the step edges
Table 1: Depth differences, their standard deviations (SD) and relative errors for the maximum magnitude method and the proposed curve fitting method

depth_{gt}   depth_{max} (\mu m)   depth_{\mu} (\mu m)   error_{max}   error_{\mu}
(\mu m)      mean      SD          mean      SD          (%)           (%)
4009.0       3898.5    7.2         3831.3    12.4        2.76          4.43
2987.0       2797.2    54.0        2810.8    13.0        6.35          5.90
2006.0       1908.7    53.6        1926.8    19.4        4.85          3.95
1004.0       941.1     0.0         958.6     23.4        6.26          4.53
903.0        806.7     0.0         815.3     17.5        10.67         9.71
803.0        792.8     40.9        742.0     14.0        1.27          7.60
703.0        633.8     77.3        665.5     20.5        9.84          5.33
600.0        590.0     65.6        561.9     19.5        1.66          6.35
472.0        403.3     0.0         468.8     13.9        14.55         0.68
410.0        403.3     0.0         391.0     13.7        1.63          4.64
298.0        268.9     0.0         287.6     14.7        9.77          3.48
208.0        268.9     0.0         192.2     15.1        -29.27        7.60
91.0         17.7      45.5        89.3      17.4        80.58         1.88
42.0         102.2     61.8        34.9      21.9        -143.28       16.84

In this part, we evaluate the depth accuracy using the StepChart dataset. For comparison with the depth \mbox{depth}_{\mu} of the proposed method, we obtain the depth \mbox{depth}_{\text{max}} from the maximum magnitude position z_{\text{max}} (Eq. (7)) of \hat{u}.

Both z-positions are calibrated to \mu m relative to the reference zero z-position z_{0}:

\begin{split}\mbox{depth}_{\mu}&=\cfrac{\mu-z_{0}}{N}\cdot\Delta d\\[5.0pt] \mbox{depth}_{\text{max}}&=\cfrac{z_{\text{max}}-z_{0}}{N}\cdot\Delta d\end{split} (19)

where N is the zero-padding factor in Eq. (2) and \Delta d=1210\,\mu m is the physical depth per sample, i.e., the system's depth resolution (see Eq. (1)).
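The calibration of Eq. (19) can be written as a small helper; the function name is hypothetical, and z and z_{0} denote (sub-)sample positions on the zero-padded z-axis:

```python
# depth calibration of Eq. (19): convert a (sub-)sample z-position into
# micrometres relative to the reference zero position z0; N is the
# zero-padding factor of Eq. (2), delta_d the physical depth per sample
def z_to_depth_um(z, z0, N, delta_d=1210.0):
    return (z - z0) / N * delta_d
```

For example, with no zero padding (N = 1) one sample step corresponds to the full 1210 \mu m depth resolution, while N = 4 refines the grid to 302.5 \mu m per sample.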

In Fig. 8, the cross-section depth of the StepChart dataset is plotted together with the expected positions of the edges. We can observe interference effects due to signal superposition at several edges, most notably at x=50 mm and x=81 mm. Since blind deconvolution can hardly resolve such strong interference effects, a spatially varying point spread function would be required to cope with these kinds of effects in the deconvolution stage; see also Hunsche et al. [36].

In order to circumvent interference effects, we extract and average depth values from the central 350 samples of each step. Then, we calculate the depth differences between adjacent steps. These depth differences are compared to the ground truth values, which are obtained by mechanical measurement. Tab. 1 depicts the depth differences and the corresponding standard deviations (SD) for the ground truth depth_{gt}, the maximum magnitude method depth_{\text{max}} and the proposed curve fitting depth_{\mu} (see Eq. (19)). In order to compare the depth accuracy, we calculate the relative errors as

\begin{split}\mbox{error}_{\mu}&=\cfrac{\mbox{depth}_{\mu}-\mbox{depth}_{gt}}{\mbox{depth}_{gt}}\\[5.0pt] \mbox{error}_{\text{max}}&=\cfrac{\mbox{depth}_{\text{max}}-\mbox{depth}_{gt}}{\mbox{depth}_{gt}}\end{split} (20)

In this paper, we consider a depth difference as resolvable when the relative error is below 10% with a reasonably low standard deviation. By this criterion, our proposed method still resolves the 91 \mu m depth difference, while the maximum magnitude method only resolves depth differences down to 298 \mu m. As a result, the proposed curve fitting method enhances the system depth accuracy to 91 \mu m.
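The evaluation steps of this section, i.e. averaging the central samples of a step, the relative error of Eq. (20) and the 10% resolvability criterion, can be sketched as follows. The function names are illustrative, and the sketch checks only the error part of the criterion (the standard deviation condition is not modeled):

```python
import numpy as np

def step_depth(depth_line, center, n=350):
    # average depth over the central n samples of one step,
    # away from the interference-affected step edges
    half = n // 2
    return float(np.mean(depth_line[center - half:center + half]))

def relative_error(depth_est, depth_gt):
    # relative depth-difference error, Eq. (20)
    return (depth_est - depth_gt) / depth_gt

def resolvable(depth_est, depth_gt, tol=0.10):
    # resolvability criterion used in the paper: relative error below 10 %
    return abs(relative_error(depth_est, depth_gt)) < tol
```

Plugging in the 91 \mu m row of Tab. 1, the fitted depth of 89.3 \mu m gives a relative error of about -1.9% and passes the criterion, while the maximum magnitude depth of 17.7 \mu m fails it.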

7.2.2 Intensity Reconstruction

Refer to caption
(a)
Refer to caption
(b)
Figure 9: Comparison between (a) the reference intensity I_{u} (Refer) and (b) the reconstructed intensity I_{v} (Reconst). The homogeneous metal regions for Sec. 7.2.2 and the PCB region for Sec. 7.3.2 are indicated.
Refer to caption
Figure 10: Evaluation of the variance of the reflected intensities along the homogeneous metal regions in Fig. 9a and Fig. 9b in y-direction: for each pixel line (parameterized by its y-position in mm), the variance of the reference intensity I_{u} (Refer) and of the reconstructed intensity I_{v} (Reconst) is given, showing the improvement resulting from the curve fitting in Sec. 5.

In this part, we evaluate the reconstructed intensity image that is directly deduced from our high accuracy depth estimate according to Eq. (14). We therefore use the MetalPCB dataset, which comprises large homogeneous copper regions. The MetalPCB target is, however, not perfectly flat and/or not perfectly orthogonally aligned.

Fig. 9 depicts the intensity images for the reference intensity I_{u} based on Eq. (13) (Fig. 9a) and the proposed reconstructed intensity I_{v} using Eq. (14) (Fig. 9b). We can observe that the copper regions do not appear fully homogeneous in the reference intensity image I_{u}. After applying the high accuracy depth reconstruction, this intensity inhomogeneity is significantly reduced in the reconstructed image I_{v}.

Fig. 10 further analyzes the intensity homogeneity using a horizontal metal region (see Fig. 9, lower image part). The horizontal metal region consists of 18 pixel rows, for each of which we compute the intensity variance in the reference intensity image I_{u} and in the reconstructed intensity image I_{v}. We observe that the reconstructed intensity I_{v} has a significantly reduced intensity variance within all rows of copper, as the reconstructed intensity is focused on an accurate depth position for each lateral position.

Both the visual and the numerical results, in Fig. 9 and Fig. 10 respectively, demonstrate that our proposed intensity reconstruction method achieves significantly more homogeneous intensities on homogeneous material regions without introducing additional distortions or noise.

7.3 Lateral Resolution and Analysis

In this section, we analyze the lateral resolution enhancement achieved by the proposed method using the MetalPCB dataset. In Sec. 7.3.1, we compare the proposed method to the Lucy-Richardson deconvolution method with respect to the lateral resolution, and in Sec. 7.3.2 we visually analyze the silk structure embedded in the PCB region.

7.3.1 Lateral resolution

Refer to caption
(a)
Refer to caption
(b)
Figure 11: Cross-section intensity comparison at Group 0, Element 4 (see Fig. 3) for different deconvolution methods. This element represents a 353.6 \mu m line distance in (a) vertical direction and (b) horizontal direction.
Refer to caption
(a)
Refer to caption
(b)
Refer to caption
(c)
Figure 12: Comparison between (a) the gaussian kernel, (b) Xu's sparse kernel and (c) cross sections of the gaussian and Xu's sparse kernels.
Refer to caption
(a) Refer_LR-G (LR, gaussian; reference intensity I_{u})
Refer to caption
(b) Reconst_LR-G (LR, gaussian; reconstructed intensity I_{v})
Refer to caption
(c) Refer_LR-Xu (LR, Xu's kernel; reference intensity I_{u})
Refer to caption
(d) Reconst_LR-Xu (LR, Xu's kernel; reconstructed intensity I_{v})
Refer to caption
(e) Refer_Xu (Xu; reference intensity I_{u})
Refer to caption
(f) Reconst_Xu (Xu; reconstructed intensity I_{v}) (proposed method)
Figure 13: Comparison between the three deconvolution methods ((a), (b): Lucy-Richardson (LR) [20, 21] with gaussian kernel, top row; (c), (d): Lucy-Richardson with Xu's kernel, middle row; (e), (f): Xu et al. (Xu) [31, 32], bottom row) applied to the reference intensity image I_{u} (left column) and the reconstructed intensity image I_{v} (right column). (f) depicts the proposed deconvolution method, i.e. the sparse-kernel blind deconvolution applied to our reconstructed intensity image; the PCB region for Sec. 7.3.2 is indicated.
Table 2: Lateral resolution enhancement based on the 3 dB criterion using the USAF test target

Method               Figure     Horizontal          Horizontal    Vertical            Vertical
                                resolution (\mu m)  improvement   resolution (\mu m)  improvement
Reconst (I_{v})      Fig. 9b    794.3               -             762.0               -
Reconst_LR-G         Fig. 13b   419.4               1.89          393.3               1.94
Reconst_LR-Xu        Fig. 13d   402.3               1.97          368.6               2.07
Reconst_Xu (I_{d})¹  Fig. 13f   346.2               2.29          359.6               2.12

¹ Reconst_Xu is the proposed method; its output is the deconvoluted intensity I_{d}.

In Fig. 13, we show the different output intensity images with respect to the input intensity image, the deconvolution method and the blur kernel. The left column shows the deconvolution results based on the reference intensity image I_{u}, whereas the right column depicts the results using the reconstructed intensity image I_{v}. For deconvolution, we apply the Lucy-Richardson method with a gaussian kernel based on the theoretical numerical aperture (Fig. 12a) (Refer_LR-G, Reconst_LR-G, top row), since Lucy-Richardson deconvolution with a gaussian kernel derived from the system numerical aperture is a commonly applied THz deblurring method [12, 13, 14], as well as Xu et al.'s blind deconvolution method [31, 32] (Refer_Xu, Reconst_Xu, bottom row). We furthermore extract the sparse kernel estimated by Xu et al.'s method (Fig. 12b) and plug it into the Lucy-Richardson approach (Refer_LR-Xu, Reconst_LR-Xu, middle row). Note that Reconst_Xu (Fig. 13f) is equal to I_{d}, the output of the method proposed in this paper.

In Fig. 12a and Fig. 12b, the gaussian kernel and Xu's sparse kernel computed by blind deconvolution are shown, respectively. Obviously, the kernel estimated by Xu's method models further effects related to the imaging system and/or the observed target that are not covered by the gaussian kernel; we will investigate these influences in future work. In Fig. 12c, we can observe that Xu's sparse kernel is not centered along the y-axis. This is because the blind deconvolution does not assume any symmetry or centering of the kernel during optimization; instead, the kernel weights are optimized as fully independent parameters throughout the process.

Comparing the results visually, we clearly see that Refer_LR-G and Reconst_LR-G (Lucy-Richardson with gaussian kernel), which has previously been used for THz resolution enhancement (see Sec. 2), yield inferior results in terms of sharpness and local contrast. Xu et al.'s method, which estimates the blur kernel from the given intensity image, yields much sharper images with improved local contrast (Refer_Xu, Reconst_Xu). Using Xu et al.'s kernel instead of the standard gaussian kernel clearly improves the results obtained by Lucy-Richardson (Refer_LR-Xu, Reconst_LR-Xu). On the downside, both Xu et al.'s method and the Lucy-Richardson method with Xu's kernel increase noise and overshooting effects; Xu et al.'s result is, however, less affected by these artefacts. Apparently, all three methods benefit from the intensity image enhanced by our reconstruction method.

Refer to caption
(a)
Refer to caption
(b)
Refer to caption
(c)
Refer to caption
(d)
Figure 14: Comparison of intensity difference and modulation transfer function (MTF) across methods and directions: (a) horizontal intensity difference, (b) horizontal MTF, (c) vertical intensity difference, (d) vertical MTF.

Moreover, Fig. 11 depicts the resolution enhancement using a cross section of Group 0, Element 4 of the USAF test target (see Fig. 3), representing a line distance of 353.6 \mu m. The cross-section intensities before deconvolution (Reconst) and after deconvolution (Reconst_LR-G, Reconst_Xu, Reconst_LR-Xu) are shown (cf. Fig. 9b, 13b, 13f and 13d). Only the proposed method Reconst_Xu can resolve this line distance according to the 3 dB criterion; note that there are different definitions of resolution, as discussed in [37]. The center positions of Reconst_LR-Xu and Reconst_Xu are both shifted, because the sparse kernel (Fig. 12b) is not centered. Although blind deconvolution does not retain the absolute intensity positions, relative intensity positions are preserved, because we assume a spatially invariant blur kernel.

In order to quantify the lateral resolution, we evaluate the contrast at each of the horizontal and vertical resolution patterns of the MetalPCB dataset (USAF target). In the case of vertical stripes, we determine the minimal and maximal intensity values for each row crossing the pattern's edges, given its known geometric structure. After removing 10% of the cross sections in the boundary area, the mean value is computed as intensity difference (in dB). Analogously, we compute the intensity difference for the horizontal stripes. Fig. 14 shows the intensity difference over the number of lines per mm as well as the modulation transfer function (MTF) [38, 39] for all methods, for vertical and horizontal resolutions. To avoid counting a simple linear intensity scaling as enhancement, a logarithmic intensity difference of at least 3 dB is commonly taken as the resolution boundary. Tab. 2 depicts the highest resolution at which each method delivers \geq 3 dB. We notice a steady resolution improvement from the non-deconvoluted image I_{v} (Reconst), via the original Lucy-Richardson method (Reconst_LR-G) and the one using Xu's kernel (Reconst_LR-Xu), to the proposed method (Reconst_Xu). Taking the 3 dB limit as boundary, we find horizontal improvement factors (in terms of resolution) of 2.29 for the proposed method Reconst_Xu, and of 1.89 and 1.97 for the Lucy-Richardson method Reconst_LR-G and the improved Lucy-Richardson method Reconst_LR-Xu, respectively. For the vertical resolution, the proposed method achieves an improvement factor of 2.12, while the Lucy-Richardson method and the improved Lucy-Richardson method achieve factors of 1.94 and 2.07, respectively. With respect to the modulation transfer function, Fig. 14b and Fig. 14d show a significantly higher contrast for the proposed method (Reconst_Xu) compared to both the Lucy-Richardson (Reconst_LR-G) and the improved Lucy-Richardson (Reconst_LR-Xu) method.
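The 3 dB criterion described above can be sketched as follows, assuming the resolution pattern has been cropped to a 2D intensity array and that intensities are power quantities (hence 10 log10); the trimming of boundary cross sections follows the 10% rule mentioned above, and all names are illustrative:

```python
import numpy as np

def pattern_contrast_db(region, axis=0, trim=0.1):
    # per cross section (along `axis`) take the minimal and maximal
    # intensity across the stripes, drop `trim` of the boundary cross
    # sections, and average the per-line contrasts in dB
    i_max = region.max(axis=axis)
    i_min = region.min(axis=axis)
    n = len(i_max)
    k = int(n * trim)
    db = 10.0 * np.log10(i_max[k:n - k] / i_min[k:n - k])
    return float(db.mean())

def resolved(region, axis=0):
    # 3 dB criterion: the pattern counts as resolved if the mean
    # logarithmic intensity difference is at least 3 dB
    return pattern_contrast_db(region, axis=axis) >= 3.0

# toy pattern: stripes twice as bright as the gaps -> ~3.01 dB contrast
pattern = np.ones((10, 8))
pattern[::2, :] = 2.0
```

A stripe-to-gap intensity ratio of 2 corresponds to 10 log10(2) ≈ 3.01 dB, i.e. a pattern at exactly half power is just resolved under this criterion.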

7.3.2 Embedded Structures

Refer to caption
(a)
Refer to caption
(b)
Figure 15: Comparison between (a) the PCB region of the reference intensity I_{u} (in decibel) and (b) the PCB region of the proposed deconvoluted intensity I_{d} (in decibel).
Refer to caption
Figure 16: Cross section in PCB region

In this part, we analyze the PCB region of the MetalPCB dataset more closely in order to investigate the effect of our enhanced lateral resolution on the embedded silk structures.

In Fig. 15, the PCB region of the reference intensity I_{u} (Refer, see Fig. 9a) and of the proposed deconvoluted intensity I_{d} (Reconst_Xu, see Fig. 13f) are shown. Thanks to the enhanced lateral resolution, a periodic intensity pattern becomes visible that is caused by the woven silk material embedded in the PCB region; the silk fibers introduce a second energy reflection into the imaging system. In Fig. 16, the cross-section intensities of the PCB region are shown in decibel scale. Extracting the maximum contrast within the PCB region, we find an enhancement from 1.91 dB to 7.98 dB. Even though the proposed method nicely emphasizes the periodic silk structure underneath the polymer surface material, we cannot extract the depth of this embedded structure with our approach, as the handling of multi-reflection effects is beyond the scope of this paper and left for future work.
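The maximum contrast quoted above can be computed from a cross section with a small helper (a hypothetical function; intensities are again treated as power quantities on the decibel scale):

```python
import numpy as np

def max_contrast_db(cross_section):
    # maximum contrast of an intensity cross section on the decibel
    # scale: ratio of the strongest to the weakest reflection
    cs = np.asarray(cross_section, dtype=float)
    return 10.0 * np.log10(cs.max() / cs.min())
```

For instance, a cross section whose strongest sample is ten times its weakest yields a maximum contrast of 10 dB.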

8 Conclusion

In this paper, we propose a THz computational image enhancement method that improves the lateral resolution and depth accuracy beyond the diffraction limit. The method is based on a complex curve fitting along the depth axis of the THz 3D image and a blind deconvolution in the lateral domain. The experimental results show that our curve fitting method enhances the depth accuracy to 91 \mu m. Owing to this enhanced depth accuracy, the experiments also show that the proposed reconstruction method achieves improved intensities on homogeneous, non-planar material regions without introducing additional distortions or noise.

Based on the reconstructed intensities, we apply several lateral deconvolution methods. Evidently, all three examined THz deconvolution methods benefit from the intensity image enhanced by our reconstruction method. In comparison to the classical Lucy-Richardson deconvolution algorithm, the experiments show that the proposed blind deconvolution method achieves the best horizontal resolution enhancement, from 794.3 \mu m to 346.2 \mu m, yielding an improvement factor of 2.29. In terms of intensity contrast, the proposed method clearly outperforms earlier approaches. Moreover, the experiments show that the proposed method is able to emphasize the fine silk texture embedded within a polymer material.

Acknowledgement

This research was funded by the German Research Foundation (DFG) as part of the research training group GRK 1564 Imaging New Modalities.

This is a pre-print of an article published in Journal of Infrared, Millimeter, and Terahertz Waves. The final authenticated version is available online at: https://doi.org/10.1007/s10762-019-00609-w

References

  • [1] Hu, B.B., Nuss, M.C.: Imaging with terahertz waves. Optics letters 20(16), 1716–1718 (1995)
  • [2] Siegel, P.H.: Terahertz technology. IEEE Transactions on microwave theory and techniques 50(3), 910–928 (2002)
  • [3] Chan, W.L., Deibel, J., Mittleman, D.M.: Imaging with terahertz radiation. Reports on progress in physics 70(8), 1325 (2007)
  • [4] Jansen, C., Wietzke, S., Peters, O., Scheller, M., Vieweg, N., Salhi, M., Krumbholz, N., Jördens, C., Hochrein, T., Koch, M.: Terahertz imaging: applications and perspectives. Appl. Opt. 49(19), E48–E57 (2010)
  • [5] Cooper, K.B., Dengler, R.J., Llombart, N., Thomas, B., Chattopadhyay, G., Siegel, P.H.: Thz imaging radar for standoff personnel screening. IEEE Transactions on Terahertz Science and Technology 1(1), 169–182 (2011)
  • [6] McClatchey, K., Reiten, M., Cheville, R.: Time resolved synthetic aperture terahertz impulse imaging. Applied physics letters 79(27), 4485–4487 (2001)
  • [7] Ding, J., Kahl, M., Loffeld, O., Haring Bolívar, P.: Thz 3-d image formation using sar techniques: simulation, processing and experimental results. IEEE Transactions on Terahertz Science and Technology 3(5), 606–616 (2013)
  • [8] Kahl, M., Keil, A., Peuser, J., Löffler, T., Pätzold, M., Kolb, A., Sprenger, T., Hils, B., Haring Bolívar, P.: Stand-off real-time synthetic imaging at mm-wave frequencies. In: Passive and Active Millimeter-Wave Imaging XV, vol. 8362, p. 836208 (2012)
  • [9] Ahi, K., Asadizanjani, N., Shahbazmohamadi, S., Tehranipoor, M., Anwar, M.: Terahertz characterization of electronic components and comparison of terahertz imaging with x-ray imaging techniques 9483 (2015)
  • [10] Johnson, J.L., Dorney, T.D., Mittleman, D.M.: Interferometric imaging with terahertz pulses. IEEE Journal of selected topics in quantum electronics 7(4), 592–599 (2001)
  • [11] Chen, H.T., Kersting, R., Cho, G.C.: Terahertz imaging with nanometer resolution. Applied Physics Letters 83(15), 3009–3011 (2003)
  • [12] Xu, L.M., Fan, W.H., Liu, J.: High-resolution reconstruction for terahertz imaging. Applied optics 53(33), 7891–7897 (2014)
  • [13] Li, Y., Li, L., Hellicar, A., Guo, Y.J.: Super-resolution reconstruction of terahertz images. In: SPIE Defense and Security Symposium, pp. 69490J–69490J. International Society for Optics and Photonics (2008)
  • [14] Ding, S.H., Li, Q., Yao, R., Wang, Q.: High-resolution terahertz reflective imaging and image restoration. Applied optics 49(36), 6834–6839 (2010)
  • [15] Hou, L., Lou, X., Yan, Z., Liu, H., Shi, W.: Enhancing terahertz image quality by finite impulse response digital filter. In: Infrared, Millimeter, and Terahertz waves (IRMMW-THz), 2014 39th International Conference on, pp. 1–2. IEEE (2014)
  • [16] Ahi, K., Anwar, M.: Developing terahertz imaging equation and enhancement of the resolution of terahertz images using deconvolution. In: SPIE Commercial+ Scientific Sensing and Imaging, pp. 98560N–98560N. International Society for Optics and Photonics (2016)
  • [17] Walker, G.C., Bowen, J.W., Labaune, J., Jackson, J.B., Hadjiloucas, S., Roberts, J., Mourou, G., Menu, M.: Terahertz deconvolution. Optics express 20(25), 27230–27241 (2012)
  • [18] Chen, Y., Huang, S., Pickwell-MacPherson, E.: Frequency-wavelet domain deconvolution for terahertz reflection imaging and spectroscopy. Optics express 18(2), 1177–1190 (2010)
  • [19] Takayanagi, J., Jinno, H., Ichino, S., Suizu, K., Yamashita, M., Ouchi, T., Kasai, S., Ohtake, H., Uchida, H., Nishizawa, N., et al.: High-resolution time-of-flight terahertz tomography using a femtosecond fiber laser. Optics express 17(9), 7533–7539 (2009)
  • [20] Lucy, L.B.: An iterative technique for the rectification of observed distributions. The astronomical journal 79, 745 (1974)
  • [21] Richardson, W.H.: Bayesian-based iterative method of image restoration. JOSA 62(1), 55–59 (1972)
  • [22] Knobloch, P., Schildknecht, C., Kleine-Ostmann, T., Koch, M., Hoffmann, S., Hofmann, M., Rehberg, E., Sperling, M., Donhuijsen, K., Hein, G., et al.: Medical thz imaging: an investigation of histo-pathological samples. Physics in Medicine & Biology 47(21), 3875 (2002)
  • [23] Xie, Y.Y., Hu, C.H., Shi, B., Man, Q.: An adaptive super-resolution reconstruction for terahertz image based on mrf model. In: Applied Mechanics and Materials, vol. 373, pp. 541–546. Trans Tech Publ (2013)
  • [24] Gu, S., Li, C., Gao, X., Sun, Z., Fang, G.: Three-dimensional image reconstruction of targets under the illumination of terahertz gaussian beam—theory and experiment. IEEE Transactions on Geoscience and Remote Sensing 51(4), 2241–2249 (2013)
  • [25] Dong, J., Wu, X., Locquet, A., Citrin, D.S.: Terahertz superresolution stratigraphic characterization of multilayered structures using sparse deconvolution. IEEE Transactions on Terahertz Science and Technology 7(3), 260–267 (2017)
  • [26] Sun, Z., Li, C., Gu, S., Fang, G.: Fast three-dimensional image reconstruction of targets under the illumination of terahertz gaussian beams with enhanced phase-shift migration to improve computation efficiency. IEEE Transactions on Terahertz Science and Technology 4(4), 479–489 (2014)
  • [27] Liu, W., Li, C., Sun, Z., Zhang, Q., Fang, G.: A fast three-dimensional image reconstruction with large depth of focus under the illumination of terahertz gaussian beams by using wavenumber scaling algorithm. IEEE Transactions on Terahertz Science and Technology 5(6), 967–977 (2015)
  • [28] Standard, M.: Photographic lenses (1959). URL http://www.dtic.mil/dtic/tr/fulltext/u2/a345623.pdf
  • [29] Munson, D.C., Visentin, R.L.: A signal processing view of strip-mapping synthetic aperture radar. IEEE Transactions on Acoustics, Speech, and Signal Processing 37(12), 2131–2147 (1989)
  • [30] Coleman, T.F., Li, Y.: An interior trust region approach for nonlinear minimization subject to bounds. SIAM Journal on optimization 6(2), 418–445 (1996)
  • [31] Xu, L., Jia, J.: Two-phase kernel estimation for robust motion deblurring. In: European conference on computer vision, pp. 157–170. Springer (2010)
  • [32] Xu, L., Zheng, S., Jia, J.: Unnatural l0 sparse representation for natural image deblurring. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1107–1114 (2013)
  • [33] Levin, A., Weiss, Y., Durand, F., Freeman, W.T.: Understanding blind deconvolution algorithms. IEEE transactions on pattern analysis and machine intelligence 33(12), 2354–2367 (2011)
  • [34] Perrone, D., Favaro, P.: Total variation blind deconvolution: The devil is in the details. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2909–2916 (2014)
  • [35] MathWorks Inc.: Using colormaps (2016). URL https://www.mathworks.com/help/matlab/examples/using-colormaps.html
  • [36] Hunsche, S., Koch, M., Brener, I., Nuss, M.: Thz near-field imaging. Optics communications 150(1), 22–26 (1998)
  • [37] Forshaw, M., Haskell, A., Miller, P., Stanley, D., Townshend, J.: Spatial resolution of remotely sensed imagery a review paper. International Journal of Remote Sensing 4(3), 497–520 (1983)
  • [38] Boreman, G.D.: Modulation transfer function in optical and electro-optical systems, vol. 21. SPIE press Bellingham, WA (2001)
  • [39] Smith, W.J.: Modern optical engineering. Tata McGraw-Hill Education (1966)