
Haptics-Enabled Forceps with Multi-Modal Force Sensing: Towards Task-Autonomous Surgery

Tangyou Liu, Student Member, IEEE, Tinghua Zhang, Student Member, IEEE, Jay Katupitiya,
Jiaole Wang, Member, IEEE, and Liao Wu, Member, IEEE
Research was partly supported by the Australian Research Council under Grant DP210100879, the Heart Foundation under Vanguard Grant 106988, and the UNSW Engineering GROW Grant PS69063 awarded to Liao Wu, partly by the Science and Technology Innovation Committee of Shenzhen under Grant JCYJ20220818102408018 awarded to Jiaole Wang, and partly by the Tyree IHealthE PhD Top-Up Scholarship awarded to Tangyou Liu. Corresponding authors: Jiaole Wang (wangjiaole@hit.edu.cn) and Liao Wu (liao.wu@unsw.edu.au). Tangyou Liu, Jay Katupitiya, and Liao Wu are with the School of Mechanical & Manufacturing Engineering, The University of New South Wales, Sydney, NSW 2052, Australia. Tinghua Zhang and Jiaole Wang are with the School of Mechanical Engineering and Automation, Harbin Institute of Technology, Shenzhen, 518055, China.
Abstract

Many robotic surgical systems have been developed with micro-sized forceps for tissue manipulation. However, these systems often lack force sensing at the tool side and the manipulation forces are roughly estimated and controlled relying on the surgeon’s visual perception. To address this challenge, we present a vision-based module to enable the micro-sized forceps’ multi-modal force sensing. A miniature sensing module adaptive to common micro-sized forceps is proposed, consisting of a flexure, a camera, and a customised target. The deformation of the flexure is obtained by the camera estimating the pose variation of the top-mounted target. Then, the external force applied to the sensing module is calculated using the flexure’s displacement and stiffness matrix. Integrating the sensing module into the forceps, in conjunction with a single-axial force sensor at the proximal end, we equip the forceps with haptic sensing capabilities. Mathematical equations are derived to estimate the multi-modal force sensing of the haptics-enabled forceps, including pushing/pulling forces (Mode-I) and grasping forces (Mode-II). A series of experiments on phantoms and ex vivo tissues are conducted to verify the feasibility of the proposed design and method. Results indicate that the haptics-enabled forceps can achieve multi-modal force estimation effectively and potentially realize autonomous robotic tissue grasping procedures with controlled forces. A video demonstrating the experiments can be found at https://youtu.be/pi9bqSkwCFQ.

Index Terms:
Surgical robotics, autonomous surgery, multi-modal force sensing, tissue manipulation.

I Introduction

An increasing number of robotic systems have been developed with miniature instruments to facilitate minimally invasive surgery (MIS) in the past decade [1, 2]. For example, micro-sized forceps have been widely adopted in recently developed robotic systems for tissue manipulation in narrow spaces [3, 4, 5]. However, one notable limitation of these systems is their lack of force sensing at the tool side for tissue manipulation, including pushing (Fig. 1(a)), grasping (Fig. 1(b)), and pulling (Fig. 1(c)) forces.

Force sensing for tissue manipulation is critically needed for two reasons. Firstly, information about these forces can enhance surgeons’ operation and decision-making if integrated into the systems appropriately [6]. For instance, Talasaz et al. [7] verified that a haptics-enabled teleoperation system could perform better during a robot-assisted suturing task. Secondly, there is a consensus on the growth of autonomy in surgical robots [8] accompanied by more dexterous mechanisms such as snake-like robots [9, 10]. Force sensing at the tool side will play an essential role in the evolution of autonomous robotic surgery [8, 11].

Figure 1: Typical tissue manipulations using a pair of forceps in robotic thyroid tumor treatment, an example scenario of robotic surgery, and the critically required force information. (a) A pair of forceps touches the targeted tissue while measuring the pushing forces. (b) The forceps grasp the targeted tissue while measuring and controlling the grasping force. (c) The forceps pull the grasped tissue while measuring and controlling the pulling force. (d) The micro-level actuator used for driving the forceps, where a commercial single-axis force sensor measures the driving force applied to the forceps' driving cable. The upper and lower linear drivers are responsible for grasping and pushing/pulling, respectively. The lower driver also compensates for the motion introduced by the flexure's deformation when the upper driver grasps tissue. (e) The haptics-enabled biopsy forceps, where the forceps' base is installed concentrically with the vision-based sensing module's head (2). Two LEDs (1) are mounted in the sensing module's head (2) to provide the light source in our prototype, which passes through the target's (3) holes and is captured by the camera (5) mounted on the other end of the flexure (4). The upper-right inset shows the installed target and the holes used as markers. The cylindrical base (6) provides a mount to connect to instruments, which is a stainless tube (8) in our prototype. Concentric to the module, a channel is reserved for the passage of the forceps' driving cable (7). The lower-right inset shows a prototype with a 4mm diameter and a 22mm length. Plot (e) also shows the camera frame $\{C\}$, the target frames of the initial $\{M_0\}$ and current $\{M_t\}$ states, and the forces applied to the forceps when they push or pull the tissue. $\mathbf{f}_d$ is the driving force on the cable and is measured by the proximal commercial force sensor. $\mathbf{f}'_d$ is the driving force transmitted to the forceps' jaws through the central cable. $\mathbf{f}_s$ is the supporting force from the sensing module, and $\mathbf{f}_p$ is the pushing/pulling force from the tissue.

Researchers have conducted various investigations to enable force sensing of different grippers for tissue manipulation. Zarrin et al. [12] fabricated a two-dimensional gripper that can measure grasping and axial forces with two fiber Bragg grating (FBG) sensors embedded into the gripper's jaws. Using the same principle, Lai et al. [13] developed a gripper that can estimate pulling and lateral forces. Although these FBG-based modules usually present a high resolution of force detection, they often suffer from sensitivity to temperature variation, and current compensation methods require complex designs [14, 15, 16]. In addition, highly expensive interrogation systems are needed for these sensors to work. By integrating capacitive sensing into the two jaws of a pair of surgical forceps, Kim et al. [17] enabled the forceps with multi-axis force sensing. However, capacitance-based methods are often susceptible to temperature and humidity variations, and additional sensing modules or complex designs and fabrication are required for compensation [18, 19]. Moreover, surgical instruments must undergo sterilization, such as autoclaving and dry heat procedures, which may harm the sensing capabilities of tool-side electronic modules [20]. These limitations may invalidate the forceps' force sensing capability in practical surgical tasks. In addition, all the above methods require re-machining the forceps' two jaws to install the proposed sensors, making them unsuitable for micro-sized forceps.

Recently, vision-based methods have also been explored for developing low-cost force sensors. Ouyang et al. [21] developed a multi-axis force sensor with a camera tracking a fiducial marker supported by four compression springs. The displacement of the marker was captured by the camera and converted to force information by multiplying a linear transformation matrix. Fernandez et al. [22] presented a vision-based force sensor for a humanoid robot’s fingers with almost the same principle. In addition to providing multi-dimensional force feedback, vision-based force sensing is relatively robust to magnetic, electrical, and temperature variations. However, these vision-based sensors developed so far are large in size and can only estimate contact forces.

Although vision has been adopted for force estimation in surgery, current methods often rely on tissue/organ deformation, movement, and biomechanical properties, and cannot be used as a stand-alone sensing module [23, 24, 25].

In summary, despite many achievements for gripper force sensing based on various principles in the past few years, force estimation of the micro-sized forceps for MIS, especially their grasping and pulling force measurement, is still challenging. This paper proposes a method to enable multi-modal force sensing of the micro-sized forceps by combining a three-dimensional vision-based force sensing module at the tool side and a single-axis commercial strain gauge sensor at the proximal side, as shown in Fig. 1(d). The developed sensing module that can be easily integrated at the tool side is responsible for perceiving the interaction between the forceps and the manipulated tissues (Fig. 1(e)). The commercial strain gauge sensor mounted at the proximal side measures the force applied to the forceps’ driving cable. By integrating these pieces of sensed information, mathematical equations are derived to estimate the forceps’ touching, grasping, and pulling forces.

The main contributions of this article are two-fold:

  1. A vision-based force-sensing module that can be easily integrated into micro-sized forceps is developed. An algorithm is designed to calculate the force applied to the sensing module, with a registration method for tracking and estimating the pose of the sensing module's target.

  2. A design of haptics-enabled forceps is further proposed by assembling the widely used biopsy forceps with the developed sensing module embedded at the tool end and a single-axis strain gauge sensor mounted at the proximal end. Mathematical formulae are derived to estimate the multi-modal force sensing of the haptics-enabled forceps, including pushing/pulling force (Mode-I) and grasping force (Mode-II).

These contributions are validated by various carefully designed experiments on phantoms and ex vivo tissues. Results indicate that the haptics-enabled forceps can achieve multi-modal force estimation effectively and potentially realize autonomous robotic tissue grasping procedures with controlled forces.

II Haptics-Enabled Micro-Sized Forceps

II-A Design of A Vision-Based Force Sensing Module

II-A1 Overall Structure

The design of the sensing module (Fig. 1(e)) follows two requirements: 1) small size and 2) adaptability to common micro-sized forceps, for example, biopsy forceps. The module is designed in a cylindrical shape to minimize anisotropy, and its main components are a flexure, a camera, and a target. A central channel is reserved for the passage of instruments. The target, supported by the flexure, is manufactured with multiple holes that allow light to pass from a light source; these holes are then captured by the camera as markers for estimating the target's pose, as depicted in Fig. 1(e). The module's size and stiffness can be customized based on the needs of the instruments and applications. Here, we assemble a prototype with a diameter of 4mm and a length of 12mm and integrate it into a pair of biopsy forceps, as shown in the lower-right of Fig. 1(e).

II-A2 Flexure

Material properties and geometry play a dominant role in the multidimensional stiffness of the flexure. Different approaches have been adopted for flexure design, including the pseudo-rigid-body replacement method [26], topology optimization [27], etc. For force sensing in different surgical tissue manipulations, the flexure should also be changeable according to the task requirements. One commercial and readily available flexure is the compression spring [21, 22]. Although compression springs typically show dominant stiffness in the axial direction, they also exhibit a certain degree of stiffness in lateral directions and can be linearly approximated [28]. Moreover, they can be easily reconfigured by choosing different wire diameters, outer diameters, and coil numbers. For the prototype, we chose a stainless steel compression spring with a wire diameter of 0.5mm, an outer diameter of 4mm, a rest length of 5mm, and four coils. The selected spring has force response ranges of 0–5N and 0–2N at axial and lateral displacements of 0–2mm and 0–1mm, respectively, which can meet the requirement of most tissue manipulations [29].

II-A3 Camera and Target

The camera for the sensing module should have a compact size, sufficient resolution, and surgical compatibility. For the prototype, we chose the OVM6946 (Omnivision Inc., USA), a medical wafer-level endoscopy camera that has been adopted in many surgical applications [30, 5]. It is 1mm in width and 2.27mm in length and has a 400×400 resolution and a 30Hz frame rate. To ensure continuous tracking and estimation, we fabricated the target with 12 circularly distributed 0.15mm-diameter holes used as markers for the camera to track and estimate the target pose; a central hole is reserved for tools to pass through. These holes can also be customized based on the instruments and applications. Considering the integration into the biopsy forceps for MIS, the prototype's target in Fig. 1(e) has a 3.4mm outer diameter and a 0.6mm-diameter central hole for the driving cable.

II-B Force Estimation of Sensing Module

Ideally, for a linear-helix compression spring, the external force $\mathbf{f}_s$ applied to the sensing module's head has an approximately linear relationship with the spring's displacement [28]:

\[
\mathbf{f}_s = \mathbf{K}_s\,\mathbf{d}_s = \begin{bmatrix} k_x & 0 & 0 \\ 0 & k_y & 0 \\ 0 & 0 & k_z \end{bmatrix}\mathbf{d}_s, \tag{1}
\]

where $\mathbf{f}_s \in \mathbb{R}^3$ denotes the external force, $\mathbf{K}_s$ denotes the stiffness matrix of the spring, and $\mathbf{d}_s \in \mathbb{R}^3$ is the displacement of the spring's end relative to its relaxed original position. In this paper, $\mathbf{d}_s$ equals the target's current displacement relative to its initial state.
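As a minimal numeric illustration of (1), the Python sketch below maps an estimated target displacement to a force estimate; the diagonal stiffness values are hypothetical, whereas the practical matrix is obtained by the calibration described in Section III-B.

```python
import numpy as np

# Hypothetical diagonal stiffness matrix [N/mm]; the practical matrix is
# obtained by calibration and need not be exactly diagonal.
K_s = np.diag([1.0, 1.0, 4.0])

# Displacement of the target relative to its initial pose [mm],
# i.e., the translation part of the estimated transformation.
d_s = np.array([0.2, -0.1, 0.5])

f_s = K_s @ d_s  # external force applied to the sensing module's head [N]
print(f_s)
```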

II-B1 Image-Based Target Pose Estimation

In order to estimate the target pose, each marker needs to be tracked by the camera first. However, as the raw image captured by the camera is full of noise, as seen in Fig. 2(b), markers cannot be detected robustly. Therefore, a bilateral filter and a threshold are adopted to minimize the noise and preserve the boundaries in the images [31]. Then, markers are circled by a parameterized blob detection [32], as shown in Fig. 2(c). Finally, each marker's central pixel position ${}^{I}\mathbf{p}$ is returned.
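The filtering and blob-detection step can be sketched with OpenCV as follows; the filter and detector parameters are illustrative assumptions rather than the exact values used for the prototype.

```python
import cv2
import numpy as np

def detect_marker_centers(frame_gray):
    """Return the pixel centres of the back-lit marker holes in a raw grayscale frame."""
    # Bilateral filter suppresses sensor noise while preserving the hole boundaries.
    smoothed = cv2.bilateralFilter(frame_gray, 7, 50, 50)
    # Thresholding keeps only the bright spots produced by the back-lit holes.
    _, binary = cv2.threshold(smoothed, 128, 255, cv2.THRESH_BINARY)

    # Parameterized blob detection (placeholder parameter values).
    params = cv2.SimpleBlobDetector_Params()
    params.filterByArea = True
    params.minArea = 5
    params.maxArea = 200
    params.filterByCircularity = True
    params.minCircularity = 0.6
    params.blobColor = 255  # detect bright blobs
    detector = cv2.SimpleBlobDetector_create(params)

    keypoints = detector.detect(binary)
    return np.array([kp.pt for kp in keypoints])  # shape (m, 2), pixel (u, v)
```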

The positions of the markers ${}^{M}\mathbf{P} \in \mathbb{R}^{2\times 12}$ in the target frame $\{M\}$ are known. The projection between the $i$-th marker centre ${}^{M}\mathbf{p}_i = [x_i,\, y_i]^{\mathrm{T}}$ and its corresponding pixel ${}^{I}\mathbf{p}_i = [u_i,\, v_i]^{\mathrm{T}}$ in the image frame $\{I\}$, as shown in Fig. 2(a), can be formulated as

\[
\begin{aligned}
{}^{I}\tilde{\mathbf{p}}_i &= \frac{1}{z}\begin{bmatrix} \frac{1}{\rho_w} & 0 & u_0 \\ 0 & \frac{1}{\rho_h} & v_0 \\ 0 & 0 & 1 \end{bmatrix}\begin{bmatrix} f & 0 & 0 & 0 \\ 0 & f & 0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix}{}_{M}^{C}\mathbf{T}\;{}^{M}\tilde{\mathbf{p}}_i \\
&= \frac{1}{z}\underbrace{\begin{bmatrix} \frac{f}{\rho_w} & 0 & u_0 \\ 0 & \frac{f}{\rho_h} & v_0 \\ 0 & 0 & 1 \end{bmatrix}}_{\text{camera intrinsic }\mathbf{A}}\begin{bmatrix} r_{11} & r_{12} & t_x \\ r_{21} & r_{22} & t_y \\ r_{31} & r_{32} & t_z \end{bmatrix}\begin{bmatrix} x_i \\ y_i \\ 1 \end{bmatrix} \\
&= \frac{t_z}{z}\underbrace{\begin{bmatrix} h_{11} & h_{12} & h_{13} \\ h_{21} & h_{22} & h_{23} \\ h_{31} & h_{32} & 1 \end{bmatrix}}_{\text{homography }\mathbf{H}}\begin{bmatrix} x_i \\ y_i \\ 1 \end{bmatrix},
\end{aligned} \tag{25}
\]

where ${}_{M}^{C}\mathbf{T} = \begin{bmatrix} \mathbf{r}_1 & \mathbf{r}_2 & \mathbf{r}_3 & \mathbf{t} \\ 0 & 0 & 0 & 1 \end{bmatrix} = \begin{bmatrix} r_{11} & r_{12} & r_{13} & t_x \\ r_{21} & r_{22} & r_{23} & t_y \\ r_{31} & r_{32} & r_{33} & t_z \\ 0 & 0 & 0 & 1 \end{bmatrix}$ is the transformation from the camera frame $\{C\}$ to the marker frame $\{M\}$, ${}^{M}\tilde{\mathbf{p}}_i = [x_i,\, y_i,\, 0,\, 1]^{\mathrm{T}}$ is the homogeneous form of ${}^{M}\mathbf{p}_i$, ${}^{I}\tilde{\mathbf{p}}_i = [u_i,\, v_i,\, 1]^{\mathrm{T}}$ is the homogeneous form of ${}^{I}\mathbf{p}_i$, $i \in [0, n]$ ($n = 11$ for the prototype), $f$ is the camera's focal length, $\rho_w$ and $\rho_h$ are the width and height of each pixel, respectively, $z = (h_{31}x_i + h_{32}y_i + 1)\,t_z$, and the camera intrinsic matrix $\mathbf{A}$ is obtained via calibration with the MATLAB camera calibration toolbox [33].

According to the transformation relationship shown in Fig. 2(a), the current target pose relative to its initial state, ${}^{M_0}_{M_t}\mathbf{T}$, can be formulated as

\[
{}^{M_0}_{M_t}\mathbf{T} = ({}^{C}_{M_0}\mathbf{T})^{-1}\;{}^{C}_{M_t}\mathbf{T}. \tag{26}
\]

The translation of ${}^{M_0}_{M_t}\mathbf{T}$ is equal to the displacement $\mathbf{d}_s$ in (1). The problem then becomes obtaining ${}^{C}_{M_0}\mathbf{T}$ and ${}^{C}_{M_t}\mathbf{T}$. The relationship between ${}^{C}_{M}\mathbf{T}$ and the homography matrix $\mathbf{H}$ can be formulated as

\[
\begin{cases}
\mathbf{r}_1 = t_z\,\mathbf{A}^{-1}[h_{11},\, h_{21},\, h_{31}]^{\mathrm{T}} \\
\mathbf{r}_2 = t_z\,\mathbf{A}^{-1}[h_{12},\, h_{22},\, h_{32}]^{\mathrm{T}} \\
\mathbf{t} = t_z\,\mathbf{A}^{-1}[h_{13},\, h_{23},\, h_{33}]^{\mathrm{T}} \\
\mathbf{r}_3 = \mathbf{r}_1 \times \mathbf{r}_2
\end{cases}. \tag{31}
\]

Because $\mathbf{r}_1$ is a unit vector, $t_z$ can be calculated by $t_z = \frac{1}{\|\mathbf{A}^{-1}[h_{11},\, h_{21},\, h_{31}]^{\mathrm{T}}\|}$. Therefore, to estimate ${}^{C}_{M_0}\mathbf{T}$ and ${}^{C}_{M_t}\mathbf{T}$, we only need to solve the homography matrices $\mathbf{H}^0$ and $\mathbf{H}^t$ of the initial and current states.
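A minimal sketch of the decomposition in (31), assuming a calibrated intrinsic matrix $\mathbf{A}$ and a homography $\mathbf{H}$ normalized so that $h_{33}=1$; in practice the recovered rotation may additionally be re-orthonormalized.

```python
import numpy as np

def pose_from_homography(H, A):
    """Recover the pose of the target frame {M} in the camera frame {C}
    from a planar homography H (with h33 = 1), following (31)."""
    A_inv = np.linalg.inv(A)
    h1, h2, h3 = H[:, 0], H[:, 1], H[:, 2]

    # Scale factor: r1 must be a unit vector.
    t_z = 1.0 / np.linalg.norm(A_inv @ h1)

    r1 = t_z * (A_inv @ h1)
    r2 = t_z * (A_inv @ h2)
    r3 = np.cross(r1, r2)
    t = t_z * (A_inv @ h3)

    T = np.eye(4)
    T[:3, :3] = np.column_stack((r1, r2, r3))
    T[:3, 3] = t
    return T  # homogeneous transform of {M} expressed in {C}
```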

Because each homography matrix $\mathbf{H}$ has eight variables, at least four independent projections between ${}^{M}\mathbf{p}_i$ and ${}^{I}\mathbf{p}_i$ are needed to solve it, and each pair's relationship can be formulated as

\[
\begin{bmatrix}
{}^{M}p_{i,x} & {}^{M}p_{i,y} & 1 & 0 & 0 & 0 & -{}^{M}p_{i,x}\,{}^{I}p_{i,u} & -{}^{M}p_{i,y}\,{}^{I}p_{i,u} \\
0 & 0 & 0 & {}^{M}p_{i,x} & {}^{M}p_{i,y} & 1 & -{}^{M}p_{i,x}\,{}^{I}p_{i,v} & -{}^{M}p_{i,y}\,{}^{I}p_{i,v}
\end{bmatrix}
\begin{bmatrix} h_{11} \\ h_{12} \\ h_{13} \\ h_{21} \\ h_{22} \\ h_{23} \\ h_{31} \\ h_{32} \end{bmatrix}
= \begin{bmatrix} {}^{I}p_{i,u} \\ {}^{I}p_{i,v} \end{bmatrix}, \tag{32}
\]

where $({}^{M}p_{i,x},\, {}^{M}p_{i,y})$ and $({}^{I}p_{i,u},\, {}^{I}p_{i,v})$ are the coordinates of the $i$-th marker in the frame $\{M\}$ and of its projection in the frame $\{I\}$, respectively. Based on this relationship and four pairs of projections, a set of equations can be established to solve $\mathbf{H}$. Substituting the solved $\mathbf{H}$ into (31), ${}^{C}_{M}\mathbf{T}$ can be further obtained.
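The stacked linear system implied by (32) can be solved in the least-squares sense once four or more correspondences are available; the helper below is a sketch under that assumption.

```python
import numpy as np

def estimate_homography(P_M, P_I):
    """Solve the homography H of (25) from marker coordinates P_M (k x 2, frame {M})
    and their detected pixel positions P_I (k x 2, frame {I}), with k >= 4, via (32)."""
    rows, rhs = [], []
    for (x, y), (u, v) in zip(P_M, P_I):
        rows.append([x, y, 1, 0, 0, 0, -x * u, -y * u])
        rows.append([0, 0, 0, x, y, 1, -x * v, -y * v])
        rhs.extend([u, v])
    h, *_ = np.linalg.lstsq(np.asarray(rows, float), np.asarray(rhs, float), rcond=None)
    return np.array([[h[0], h[1], h[2]],
                     [h[3], h[4], h[5]],
                     [h[6], h[7], 1.0]])
```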

Figure 2: (a) The projection relationship between the $i$-th marker (${}^{M}\mathbf{p}_i$) in the target frame $\{M\}$ and its corresponding pixel position (${}^{I}\mathbf{p}_i$) in the image frame $\{I\}$; (a) also shows the original indexes of the markers. (b) The directly captured image at the initial state. (c) The blob detection result at the initial state based on the filtered image; the detected marker centers are returned for pose estimation. (d) The blob detection result at a random pose, where the indexes of the detected markers differ and the points labeled 3, 6, and 11 in the initial state are lost. (e) The registration result of (d), where the markers are registered to their original indexes.

II-B2 Robust Marker Registration

Ideally, by substituting four detected ${}^{I}\mathbf{p}_i$ and their corresponding ${}^{M}\mathbf{p}_i$ into (32), we can calculate the homography matrix $\mathbf{H}$ for each state. However, under external forces, the pose variation of the target causes occlusion or halation of some markers, resulting in wrong correspondences between image points and markers. As a result, the correspondence between ${}^{I}\mathbf{p}_i$ and ${}^{M}\mathbf{p}_i$ changes over time, which invalidates the direct use of (32). Therefore, a registration process is required to relabel the currently detected markers' pixel positions ${}^{I}\mathbf{P}^t$ with their original indexes, as shown in Fig. 2(a); we define this registered result as ${}^{I}\mathbf{P}^{t,r}$.

At the initial state $t=0$, the target is installed perpendicularly to the optical axis. For the prototype we built, all the markers can be detected except the 11th marker (top marker), which is occluded by the channel, as shown in Fig. 2(b). Therefore, we define ${}^{M}\mathbf{P}^0$, obtained by removing the 11th marker from ${}^{M}\mathbf{P}$, as the marker set at the initial state, and the projection $\mathbf{H}^0$ between ${}^{M}\mathbf{P}^0$ and ${}^{I}\mathbf{P}^0$ can be obtained according to (32). Substituting ${}^{M}\mathbf{p}_{11}$ and $\mathbf{H}^0$ back into (32), we can estimate ${}^{I}\mathbf{p}^0_{11}$. Then, the complete pixel-position array of the initial state ${}^{I}\mathbf{P}^{0,c}$ can be obtained by appending ${}^{I}\mathbf{p}^0_{11}$ to ${}^{I}\mathbf{P}^0$.

For the current state $t$, the pixel positions of the detected markers ${}^{I}\mathbf{P}^t \in \mathbb{R}^{2\times m}$ ($m$ is the number of detected markers, $m \leqslant n$) are returned by the blob detection. Nevertheless, these detected markers are labelled directly from bottom to top in the image; for example, the blob detection result of a random pose is shown in Fig. 2(d). Assuming ${}^{I}\mathbf{P}^{t-1,c}$ is known, we calculate the L1 norm of the distance vector between each ${}^{I}\mathbf{p}^t_i$ and ${}^{I}\mathbf{p}^{t-1,c}_j$, $i \in [1, m]$ and $j \in [1, n]$, and define it as $e$. The corresponding marker of ${}^{I}\mathbf{p}^t_i$ in ${}^{I}\mathbf{P}^{t-1,c}$ is the one that results in the minimal $e$, and its index is the desired $j^*$. Without loss of generality, the registration problem for the detected $i$-th marker is defined as

\[
j^* = \arg\min_{j}\left\|{}^{I}\mathbf{p}^t_i - {}^{I}\mathbf{p}^{t-1,c}_j\right\|_1. \tag{33}
\]

Repeating this registration for each detected marker ${}^{I}\mathbf{p}^t_i$ and updating ${}^{I}\mathbf{p}^{t,r}_{j^*} \leftarrow {}^{I}\mathbf{p}^t_i$, we can obtain ${}^{I}\mathbf{P}^{t,r}$ with the original indexes. We define the corresponding marker position set of ${}^{I}\mathbf{P}^{t,r}$ as ${}^{M}\mathbf{P}^t$. Substituting ${}^{I}\mathbf{P}^{t,r}$ and ${}^{M}\mathbf{P}^t$ into (32), the homography matrix of the current state $\mathbf{H}^t$ can be calculated.

To ensure that the image of the current state can be used as the reference for the next registration, i.e., the $t+1$ state, the complete pixel-position set ${}^{I}\mathbf{P}^{t,c}$ of the current state should also be generated. By substituting the undetected markers' coordinates in ${}^{M}\mathbf{P}$ and $\mathbf{H}^t$ into (32), we can estimate their corresponding pixel positions ${}^{I}\mathbf{P}^{t,e}$. Then, ${}^{I}\mathbf{P}^{t,c}$ can be obtained by merging ${}^{I}\mathbf{P}^{t,e}$ with ${}^{I}\mathbf{P}^{t,r}$. For example, Fig. 2(e) shows the registration result and the generated complete image of Fig. 2(d).
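The nearest-neighbour registration of (33) and the completion step can be sketched as follows; the index conventions and helper names are illustrative assumptions.

```python
import numpy as np

def project_marker(p_m, H):
    """Project a target-frame marker (x, y) to pixel coordinates through H, as in (25)."""
    x, y = p_m
    w = H[2, 0] * x + H[2, 1] * y + 1.0
    u = (H[0, 0] * x + H[0, 1] * y + H[0, 2]) / w
    v = (H[1, 0] * x + H[1, 1] * y + H[1, 2]) / w
    return np.array([u, v])

def register_markers(P_t, P_prev_complete):
    """Assign each detected pixel centre in P_t (m x 2) to the index of its
    nearest (L1) neighbour in the previous complete set P_prev_complete (n x 2)."""
    registered = {}
    for p in P_t:
        dists = np.abs(P_prev_complete - p).sum(axis=1)  # L1 distances, cf. (33)
        registered[int(np.argmin(dists))] = p
    return registered  # original index -> current pixel position

def complete_pixel_set(registered, P_M, H_t):
    """Fill in undetected markers by reprojecting their known target-frame
    coordinates P_M (n x 2) through the current homography H_t."""
    P_complete = np.zeros((len(P_M), 2))
    for j, p_m in enumerate(P_M):
        P_complete[j] = registered[j] if j in registered else project_marker(p_m, H_t)
    return P_complete
```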

II-B3 Flexure Deformation Estimation

The transformation between the target and the camera at the initial state, ${}^{C}_{M_0}\mathbf{T}$, can be calculated by substituting ${}^{I}\mathbf{P}^0$ and $\mathbf{H}^0$ into (31). Similarly, the transformation at the current state, ${}^{C}_{M_t}\mathbf{T}$, can be obtained by substituting ${}^{I}\mathbf{P}^{t,r}$ and $\mathbf{H}^t$ into (31). Then, the transformation ${}^{M_0}_{M_t}\mathbf{T}$, which reflects the deformation of the flexure, can be calculated by substituting ${}^{C}_{M_0}\mathbf{T}$ and ${}^{C}_{M_t}\mathbf{T}$ into (26).

II-B4 Force Calculation

The displacement of the spring $\mathbf{d}_s$ is the translation vector of ${}^{M_0}_{M_t}\mathbf{T}$. Substituting $\mathbf{d}_s$ and the spring's stiffness matrix $\mathbf{K}_s$ into (1), we can estimate the external force $\mathbf{f}_s$. Ideally, the stiffness matrix $\mathbf{K}_s$ represents the characteristics of the spring that connects the camera and the target. In practice, however, $\mathbf{K}_s$ should also account for the influence of the wires and the shelter, as they act in parallel with the spring. To obtain the practical stiffness matrix $\mathbf{K}_s$, a calibration is carried out on the prototype, as detailed in Section III. The above pose and force estimation method is summarized in Algorithm 1.

Require: ${}^{M}\mathbf{P}$, $\mathbf{A}$, $\mathbf{K}_s$, and frame
Ensure: ${}^{M_0}_{M_t}\mathbf{T}$, $\mathbf{f}_s$
1: set $t = 0$;
2: get ${}^{I}\mathbf{P}^t$ via blob detection based on frame;
3: get ${}^{M}\mathbf{P}^t$ by removing ${}^{M}\mathbf{p}_{n-1}$ from ${}^{M}\mathbf{P}$;
4: calc $\mathbf{H}^t$ by substituting ${}^{I}\mathbf{P}^t$ and ${}^{M}\mathbf{P}^t$ into (32);
5: estimate ${}^{I}\mathbf{p}^0_{n-1}$ by substituting ${}^{M}\mathbf{p}_{n-1}$ and $\mathbf{H}^0$ into (32);
6: get ${}^{I}\mathbf{P}^{t,c}$ by appending ${}^{I}\mathbf{p}^t_{n-1}$ to ${}^{I}\mathbf{P}^t$;
7: calc ${}^{C}_{M_0}\mathbf{T}$ by substituting $\mathbf{H}^t$ into (31);
8: while frame do
9:     $t$++;
10:     get ${}^{I}\mathbf{P}^t$ via blob detection based on frame;
11:     for $i = 0$; $i < \mathrm{col}({}^{I}\mathbf{P}^t)$; $i$++ do
12:         $e' = \infty$; $j^* = \infty$;
13:         for $j = 0$; $j < \mathrm{col}({}^{I}\mathbf{P}^{t-1,c})$; $j$++ do
14:             $e = \|{}^{I}\mathbf{p}^t_i - {}^{I}\mathbf{p}^{t-1,c}_j\|_1$;
15:             if $e < e'$ then
16:                 $j^* = j$; $e' = e$;
17:             end if
18:         end for
19:         if $j^* \leq \mathrm{col}({}^{I}\mathbf{P}^{t-1,c})$ then
20:             ${}^{I}\mathbf{p}^{t,r}_{j^*} \leftarrow {}^{I}\mathbf{p}^t_i$;
21:         end if
22:     end for
23:     get the corresponding ${}^{M}\mathbf{P}^t$ of ${}^{I}\mathbf{P}^{t,r}$ from ${}^{M}\mathbf{P}$;
24:     calc $\mathbf{H}^t$ by substituting ${}^{I}\mathbf{P}^{t,r}$ and ${}^{M}\mathbf{P}^t$ into (32);
25:     get ${}^{I}\mathbf{P}^{t,e}$ by substituting ${}^{M}\mathbf{P}$ and $\mathbf{H}^t$ into (25);
26:     generate ${}^{I}\mathbf{P}^{t,c}$ by merging ${}^{I}\mathbf{P}^{t,r}$ with ${}^{I}\mathbf{P}^{t,e}$;
27:     calc ${}^{C}_{M_t}\mathbf{T}$ by substituting $\mathbf{H}^t$ into (31);
28:     calc ${}^{M_0}_{M_t}\mathbf{T}$ by substituting ${}^{C}_{M_0}\mathbf{T}$ and ${}^{C}_{M_t}\mathbf{T}$ into (26);
29:     calc $\mathbf{f}_s$ by substituting ${}^{M_0}_{M_t}\mathbf{T}$ and $\mathbf{K}_s$ into (1);
30: end while
Algorithm 1: Pose and force estimation of the sensing module

II-C Haptics-Enabled Forceps and Force Estimation

II-C1 Micro-level Forceps Actuator

To drive the forceps and conduct experimental verification, we designed a micro-level actuator, shown in Fig. 1(d), which consists of upper and lower linear drivers responsible for grasping and pushing/pulling, respectively. The lower driver also compensates for the motion introduced by the flexure's deformation when the upper driver actuates the forceps to grasp tissue: the lower driver's carrier is moved in the reverse direction by the same amount as the flexure's axial deformation estimated by the camera. A commercial single-axis force sensor (ZNLBS-5kg, CHINO, China), with a resolution of 0.015N, is installed collinearly with the forceps and connected to the driving cable to measure the driving force.

II-C2 Pushing/Pulling Force Estimation (Mode-I)

Integrating the sensing module into the forceps, we can enable its haptic sensing; the haptics-enabled forceps are shown in Fig. 1(e). The base of the forceps is installed concentrically with the sensing module's head, and the driving cable passes through the reserved central channel. The cylindrical base connects the forceps to its carrying instrument (a stainless tube). A prototype, with a 4mm diameter and a 22mm total length, is shown in the inset of Fig. 1(e) and is used in the following description and experiments. Together with the micro-level actuator, the proposed haptics-enabled forceps can be easily integrated into a robotic surgical system, such as the robot presented in Fig. 1(a). The forces applied to the forceps when they push or pull a tissue are also shown in Fig. 1(e). $\mathbf{f}_p$ is the pushing/pulling force from the interacting tissue. $\mathbf{f}_s$ is the elastic force from the spring, which can be directly estimated by (1). $\mathbf{f}'_d$ is the driving force of the cable at the tool end. Ignoring friction, we assume $|\mathbf{f}'_d|$ equals $|\mathbf{f}_d|$ measured by the single-axis force sensor connected to the driving cable at the proximal side. Then, $\mathbf{f}'_d$ can be formulated as $\mathbf{f}'_d = {}^{M_0}_{M_t}\mathbf{T}\,\mathbf{f}_d$. Finally, we can obtain the pushing/pulling force $\mathbf{f}_p$ as

\[
\mathbf{f}_p = -{}^{M_0}_{M_t}\mathbf{T}\,\mathbf{f}_d - \mathbf{f}_s. \tag{34}
\]

For tissue touching, the contact force can also be estimated by (34), regardless of whether the forceps are closed or open.
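A minimal sketch of Mode-I, (34); the assumption that the driving cable runs along the module's $z$-axis in the initial frame, and the use of only the rotation part of the estimated transformation, are illustrative choices.

```python
import numpy as np

def pushing_pulling_force(T_M0_Mt, f_d_magnitude, f_s):
    """Mode-I estimate of the pushing/pulling force f_p, following (34).
    T_M0_Mt : 4x4 transform of the current target frame w.r.t. its initial frame.
    f_d_magnitude : driving-cable force measured by the proximal single-axis sensor [N].
    f_s : 3-vector spring force estimated by the sensing module via (1) [N]."""
    # Assume the cable force acts along the module's z-axis in the initial frame.
    f_d = np.array([0.0, 0.0, f_d_magnitude])
    R = T_M0_Mt[:3, :3]          # rotation part of the estimated transformation
    f_d_tool = R @ f_d           # driving force expressed at the tool end
    return -f_d_tool - f_s       # f_p = -T f_d - f_s
```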

II-C3 Grasping Force Estimation (Mode-II)

The estimation of the grasping force is more challenging because it relates to both the geometry and the driving force applied to the forceps' cable, as depicted in Fig. 3. Although the sensing module can estimate the 3D elastic force $\mathbf{f}_s$ and the orientation of $\mathbf{f}'_d$, we mainly consider the situation where the forceps grasp and pull the tissue straight along the tool axis, as grasping is performed in the released condition.

Figure 3: (a) The sectional view of the haptics-enabled forceps when the two jaws are closed. (b) A state in which the two jaws are open at angle $\theta$; $t_d$ is the movement of the driving cable, while $t_s$ is the displacement of the sensing module's head estimated by the camera. (c) The forces applied to the forceps when they grasp and pull a tissue, including the driving force $F_d$ from the cable, the supporting force $F_s$ from the sensing module, the contact force $F_g$ (also called the grasping force), and the friction force $F_f$ from the tissue. The middle inset sketches the forces applied to the tissue, where $F_p$ is the pulling force from the tissue body, and $F'_g = F_g$ and $F'_f = F_f$ are from the forceps. The right inset shows the forces that generate the moment of one jaw about joint $j_2$. $F_d$ is measured by the proximal force sensor, while the driving distance $t_d$ is measured by the upper driver's encoder of the micro-level actuator.

The forces applied to the forceps when they grasp and pull a tissue are shown in Fig. 3(c). For convenience, in the following sections we use $F_{(\cdot)}$ to denote the magnitude of the force $\mathbf{f}_{(\cdot)}$. $F_s$ is the elastic force from the spring and is directly estimated by (1). $F_d$ is the driving force applied to the forceps' hinge and measured by the proximal single-axis force sensor. $F_g$ and $F_f$ are the contact and friction forces from the grasped tissue, as shown in Fig. 3(c); in the following description, $F_g$ is referred to as the grasping force.

Assuming the net moment of one jaw about joint $j_2$ is zero, as sketched in the right inset of Fig. 3(c), we can obtain the relationship between $F_{g_1}$ and $F_d$ as

\[
F_{g_1} = \frac{F_d\, l_2 \sin\alpha}{2\, l_3}, \tag{35}
\]

where $\alpha = \frac{\theta}{2} + \alpha_0$; the definitions of $\theta$, $\alpha$, $\alpha_0$, $l_2$, and $l_3$ are shown in Fig. 3(b). $\theta$ and $\alpha_0$ can be calculated from the geometric relationship of the forceps' links. Fig. 3(a) shows the forceps' geometry at the initial state, i.e., the forceps are closed and the flexure is relaxed. According to the relationship of the forceps' links at the initial state, we can obtain $\alpha_0$ as

\[
\alpha_0 = \arccos\frac{l_{1,2}^2 + l_2^2 - l_1^2}{2\, l_1\, l_{1,2}}, \tag{36}
\]

where $l_{1,2}$ is the distance between the forceps' two joints $j_1$ and $j_2$. When the forceps are open at angle $\theta$, the geometry changes to that of Fig. 3(b). At this stage, the angle $\alpha$ can be reformulated as

\[
\alpha = \arccos\frac{l_{1,2}'^{\,2} + l_2^2 - l_1^2}{2\, l_1\, l_{1,2}'}, \tag{37}
\]

where $l_{1,2}' = l_{1,2} + t_d - t_s$, $t_d$ is the movement of the driving cable measured by the upper driver's encoder of the micro-level actuator, and $t_s$ is the axial displacement of the sensing module's head estimated by the camera. Moreover, the relationship among $\theta$, $\alpha$, and $\alpha_0$ is

\[
\theta = 2(\alpha - \alpha_0). \tag{38}
\]

For a pair of forceps, we assume $F_{g_1}$ equals $F_{g_2}$ and $F_{f_1}$ equals $F_{f_2}$. Then, referring to (34), we can formulate $F_g$ as

\[
F_g = \frac{F_d\, l_2 \sin\alpha}{2\, l_3} = \frac{(F_p + F_s)\, l_2 \sin\alpha}{2\, l_3}. \tag{39}
\]
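Putting (36)–(39) together, the Mode-II computation can be sketched as below; the link lengths and the cable/head displacements are hypothetical inputs.

```python
import numpy as np

def grasping_force(F_d, l1, l2, l3, l12, t_d, t_s):
    """Mode-II estimate of the grasping force F_g from the measured driving force
    F_d and the forceps geometry, following (36)-(39).
    l1, l2, l3, l12 : link lengths and the joint distance at the closed state.
    t_d : driving-cable travel measured by the upper driver's encoder.
    t_s : axial displacement of the sensing module's head estimated by the camera."""
    # Initial angle at the closed state, (36).
    alpha_0 = np.arccos((l12**2 + l2**2 - l1**2) / (2.0 * l1 * l12))
    # Updated joint distance and angle when the jaws are open, (37).
    l12_p = l12 + t_d - t_s
    alpha = np.arccos((l12_p**2 + l2**2 - l1**2) / (2.0 * l1 * l12_p))
    theta = 2.0 * (alpha - alpha_0)               # opening angle, (38)
    F_g = F_d * l2 * np.sin(alpha) / (2.0 * l3)   # grasping force, (39)
    return F_g, theta
```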

III Experiments & Discussion

In this section, we first present the experiments on evaluating the method for target tracking and calibrating the stiffness matrix for external force estimation. Then, the evaluation of two modes of force estimation is described. Finally, groups of automatic robotic grasping procedures with the proposed system are illustrated as potential applications.

Figure 4: (a) The experimental setup for evaluating pose estimation. The inset shows the configuration, where the EM tracking sensor was installed concentrically with the target. (b) The sensor's head was moved towards the $x$, $y$, and $z$ directions during the experiments, and $t_z$ denotes the translation along the $z$ axis. (c) The orientation comparison between nine pairs of points estimated by the EM tracking system ($\star$) and the camera ($\ast$). (d) The position comparison between the continuous EM tracking and camera estimation results.

III-A Evaluation of Target’s Pose Estimation

III-A1 Experimental Setup

Fig. 4(a) shows the experimental setup for the pose estimation evaluation with an electromagnetic (EM) tracking system (Northern Digital Inc., Canada), whose resolutions are better than 0.1mm in position and 0.1° in orientation. The EM tracking sensor was installed concentrically with the proposed force sensing module. During the experiment, the sensor's head was moved along $x$ (red arrow), $y$ (green arrow), and $z$ (blue arrow), as indicated in Fig. 4(b).

III-A2 Pose Estimation Results

The orientation comparison is based on nine pairs of samples distributed in the sensing module's workspace, as plotted in Fig. 4(c). Here, we use $(\alpha_{C,i}, \beta_{C,i}, \gamma_{C,i})$ and $(\alpha_{E,i}, \beta_{E,i}, \gamma_{E,i})$ to denote the pitch, roll, and yaw angles of the camera estimation and the EM tracking results of the $i$-th pair. The max mean and absolute maximum deviations between the EM tracking and our estimation results are calculated by $\max\frac{1}{n}\left(\sum_{i=1}^{n}(\alpha_{C,i}-\alpha_{E,i}),\; \sum_{i=1}^{n}(\beta_{C,i}-\beta_{E,i}),\; \sum_{i=1}^{n}(\gamma_{C,i}-\gamma_{E,i})\right)$ and $\max\left(\max_{i=1}^{n}(\alpha_{C,i}-\alpha_{E,i}),\; \max_{i=1}^{n}(\beta_{C,i}-\beta_{E,i}),\; \max_{i=1}^{n}(\gamma_{C,i}-\gamma_{E,i})\right)$, where $n=9$. For the position evaluation, the EM sensor and the target were tracked continuously by the EM tracking system and the camera, respectively, and the comparison is shown in Fig. 4(d). To calculate the mean and max deviations between the EM and camera-tracked traces, we further calculated the mean ($d_a$) and Hausdorff ($d_h$) distances by

\[
\begin{cases}
d_a = \max\left(\dfrac{1}{n}\sum\limits_{i=1}^{n} d_{C_i,E},\; \dfrac{1}{m}\sum\limits_{j=1}^{m} d_{E_j,C}\right) \\[2ex]
d_h = \max\left(\max\limits_{i=1}^{n}(d_{C_i,E}),\; \max\limits_{j=1}^{m}(d_{E_j,C})\right)
\end{cases}, \tag{42}
\]

where $d_{C_i,E} = \min_{j=1}^{m}(\|\mathbf{p}_{C,i} - \mathbf{p}_{E,j}\|_2)$, $\mathbf{p}_{C,i}$ and $\mathbf{p}_{E,j}$ denote the positions of the camera-estimated point $i$ and the EM-tracked point $j$, $d_{E_j,C} = \min_{i=1}^{n}(\|\mathbf{p}_{E,j} - \mathbf{p}_{C,i}\|_2)$, and $n=1336$ and $m=2181$ are the numbers of points recorded by the camera and the EM tracking system, respectively. As listed in Table I, the max mean, absolute maximum, and root mean square (RMS) deviations of the orientation are 0.056rad, 0.147rad, and 0.041rad, and those of the position are 0.0368mm, 0.2265mm, and 0.0551mm. This indicates that the proposed method can track and estimate the target's pose reliably.

TABLE I: Evaluation of orientation and position deviation
Experiment Max Mean Maximum RMS
Orientation 0.056rad 0.147rad 0.041rad
Position 0.0368mm 0.2265mm 0.0551mm
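The trace-deviation metrics of (42) can be computed directly from the two recorded point sets; below is a sketch using SciPy's pairwise distances, with the array shapes as assumptions.

```python
import numpy as np
from scipy.spatial.distance import cdist

def trace_deviations(P_cam, P_em):
    """Mean (d_a) and Hausdorff (d_h) deviations between the camera-estimated
    trace P_cam (n x 3) and the EM-tracked trace P_em (m x 3), following (42)."""
    D = cdist(P_cam, P_em)          # pairwise Euclidean distances, n x m
    d_cam_to_em = D.min(axis=1)     # d_{C_i,E}: nearest EM point for each camera point
    d_em_to_cam = D.min(axis=0)     # d_{E_j,C}: nearest camera point for each EM point
    d_a = max(d_cam_to_em.mean(), d_em_to_cam.mean())
    d_h = max(d_cam_to_em.max(), d_em_to_cam.max())
    return d_a, d_h
```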

III-B Stiffness Matrix Calibration and Verification

III-B1 Experimental Setup

We used a series of weights to calibrate the haptics-enabled forceps' stiffness matrix, considering the influence of the driving cable and the soft shell. The calibration platform is shown in Fig. 5(a). Here, the micro-level actuator is used to adjust the position and hold the forceps. An orientation module is installed concentrically with the forceps for adjusting the direction of the pulling force provided by a cable. This cable is connected to the forceps' jaws, and the force $F_w$ applied to it is adjusted by adding/removing weights.

Figure 5: (a) The experimental setup for stiffness matrix calibration. The forceps were pulled by a cable connected to changeable weights, and the orientation module was used to adjust the pulling direction of the forceps. (b) The setup for calibration data collection. (c) The setup for verification data collection, where the orientation module was rotated 45° relative to that used for calibration; $v_1$, $v_2$, $v_3$, and $v_4$ indicate the pulling directions for verification in the $x$-$y$ plane. (d) and (e) The verification results in the $x$-$y$ plane and the $z$-axial direction, respectively, where $F_w$ and $F_e$ denote the force generated by the weights and the estimated result.

III-B2 Calibration and Verification

Two groups of data were collected, for calibration and verification, respectively. When collecting the calibration data in the $x$-$y$ plane, as shown in Fig. 5(b), the orientation module aligned the cable with the $x$ and $y$ axes, and the force was increased from 0N to 0.49N at intervals of 0.07N. Then, for the $z$ axis, the driving force was increased from 0N to 5N at intervals of 0.5N. The target pose at each interval was recorded. Substituting the calibration data into $\mathbf{K}_s = \mathbf{f}_s/\mathbf{d}_s$ in MATLAB, we obtained the stiffness matrix $\mathbf{K}_s$ as

\[
\mathbf{K}_s = \begin{bmatrix}
0.9592 & 0.0932 & -0.0184 \\
0.1210 & 0.8807 & -0.0170 \\
0.0013 & 0.0012 & 3.8520
\end{bmatrix}. \tag{46}
\]

To verify the calibrated $\mathbf{K}_s$, we rotated the orientation module by 45°, as shown in Fig. 5(c), and collected data following the same procedure as for calibration. For the $x$-$y$ plane the interval was set to 0.035N, while for the $z$ axis it was 0.02N. The comparisons of the weights and the estimated forces are plotted in Fig. 5(d) and (e), which show the results in the $x$-$y$ plane and the $z$-axial direction, respectively. The absolute mean, maximum, and RMS errors of ($f_x$, $f_y$, $f_z$) in these three comparisons are (0.0064, 0.0045, 0.0383)N, (0.0233, 0.0154, 0.1996)N, and (0.8494, 0.5884, 4.9367)N, which are (1.86%, 1.30%, 0.76%), (6.71%, 4.46%, 3.97%), and (2.45%, 1.70%, 0.98%) of the measured force amplitude (MFA), respectively. This indicates that the calibrated $\mathbf{K}_s$ can be used to estimate the force applied to the forceps.
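The calibration step amounts to a least-squares fit of $\mathbf{K}_s$ over the stacked force/displacement pairs (the Python equivalent of MATLAB's matrix right division); the data layout below is an assumption.

```python
import numpy as np

def calibrate_stiffness(D, F):
    """Least-squares stiffness matrix K_s such that F ≈ K_s D, cf. (1).
    D : 3 x N matrix of measured target displacements [mm].
    F : 3 x N matrix of applied calibration forces from the weights [N]."""
    # Solve K_s D = F in the least-squares sense (equivalent to MATLAB's F/D):
    # D.T @ K_s.T = F.T, solved column-wise by lstsq.
    K_s_T, *_ = np.linalg.lstsq(D.T, F.T, rcond=None)
    return K_s_T.T
```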

Figure 6: The evaluation of Mode-I ($F_p$ estimation) in various directions. (a) A grasped tissue was pulled by a cable from the initial state o$_0$ towards $+y$ (o$_1$), $+x$ (o$_2$), $-y$ (o$_3$), and $-x$ (o$_4$). Snapshots show the states when the spring has maximum deformation in different orientations, and the corresponding camera frames are attached below them. (b) $F_p$ estimated by our proposed sensing module (o$_{e,i}$) and that measured by the reference commercial sensor (o$_{r,i}$).
TABLE II: Errors of $F_p$ [N] estimation in various orientations
Error o1 o2 o3 o4
Mean 0.030(4.41%) 0.038(5.15%) 0.028(4.01%) 0.023(3.91%)
Max 0.077(11.17%) 0.131(17.75%) 0.128(14.42%) 0.064(10.69%)
RMS 0.038(5.50%) 0.049(6.69%) 0.043(6.16%) 0.028(4.74%)

III-C Evaluation of Mode-I: FpF_{p} Estimation

In this experiment, the grasped tissue was pulled in various directions by a cable connected to a force sensor. Fig. 6(a) shows that the grasped tissue was pulled from the initial state o$_0$ towards $+y$ (o$_1$), $+x$ (o$_2$), $-y$ (o$_3$), and $-x$ (o$_4$). Snapshots show the states when the flexure has maximum deformation in different orientations, and the corresponding camera registration frames are attached below them. Fig. 6(b) shows the forces estimated by the proposed sensing module (o$_{e,i}$) and those measured by the commercial sensor (o$_{r,i}$) connected to the tissue-pulling cable. Table II lists the comparison of $F_p$ between o$_{e,i}$ and o$_{r,i}$. The absolute mean, maximum, and RMS errors of the four tests are under 0.038N (5.15% of MFA), 0.131N (17.75% of MFA), and 0.049N (6.69% of MFA), which means the forceps are valid for estimating $F_p$ in various directions and meet the requirement of general surgical applications (for example, the clinical review [29] indicated that an absolute error of less than 0.28N can meet the requirement for tissue retraction with grasping in general surgery).

III-D Evaluation of Mode-II: FgF_{g} Estimation

III-D1 Experimental Setup

Fig. 7(a) shows the experimental setup, where a commercial pressure sensor (FlexiForce A201-1 lbs, Tekscan, USA), with a resolution of 0.02N, and a force sensor (ZNLBS-5kg, CHINO, China) were adopted as references. The pressure sensor was grasped by the forceps for direct measurement of the grasping force. Because the pressure sensor's sensing area is a 10mm-diameter circle, 3D-printed additional surfaces were attached to the forceps' jaws to ensure sufficient contact. The driving force reached around 7N during the grasping action in each experiment. Since, from (39), the grasping force $F_g$ is also related to the pulling force $F_p$, we connected the pressure sensor to a commercial force sensor via a flexure to measure the applied pulling force.

III-D2 Evaluations

We carried out three experiments with different $\theta$ configurations. Each experiment was performed as a continuous process, and the results are shown in Fig. 7(b), (c), and (d), where the subscript $i \in \{1, 2, 3\}$ denotes the $i$-th experiment, corresponding to $\theta = 10°$, $30°$, and $50°$. The forceps grasped the pressure sensor and reached the target grasping force during 0–18s. After a pause phase (18–26s), the forceps pulled the grasped sensor backward by 25mm in the pulling phase (26–65s).

Fig. 7(b) depicts $F_d$ provided by the driving cable and $F_s$ measured by our proposed sensing module. $F_s$ and $F_d$ were almost equal in the grasping and pause phases, because the pulling force applied to the forceps was almost zero in these two phases: there was no pulling action, and the pressure sensor was floating. In contrast, in the pulling phase, $F_d$ increased while $F_s$ remained constant, because the pressure sensor was being pulled ($F_p = F_d - F_s$) while the flexure deformation stayed constant. The difference between $F_d$ and $F_s$ is the pulling force $F_p$ applied to the pressure sensor. As the three experiments followed the same procedure, their data almost overlap. However, because $\theta$ was configured differently, the time taken to perform the grasping differs slightly; the inset in Fig. 7(b) shows that the larger $\theta$ is, the sooner the grasping finishes.

Fig. 7(c) shows the pulling force $F_{p_{r,i}}$ measured by the commercial sensor during the experimental procedure, which we compared with the forceps' estimated result $F_{p_{e,i}}$. The pulling forces of these three experiments almost overlap, as the pulling distances were equal. A comparison between the commercial sensor results $F_{p_r}$ and our estimations $F_{p_e}$ is listed in Table III. The mean, maximum, and RMS errors of the three experiments are under 0.133N, 0.497N, and 0.167N, respectively. This also shows that the forceps are valid for estimating the pulling force while grasping tissue.

Figure 7: The evaluation of Mode-II ($F_g$ estimation). (a) The experimental setup, where a commercial pressure sensor and a force sensor were used as references for the grasping and pulling forces, respectively. The inset shows three 3D-printed contact surfaces configured with $\theta$ = 10°, 30°, and 50°. (b) $F_d$ provided by the driving cable and $F_s$ measured by our proposed force sensing module; their difference is the pulling force $F_p$ applied to the grasped pressure sensor. The subscript $i \in \{1, 2, 3\}$ denotes the $i$-th experiment, corresponding to $\theta = 10°$, $30°$, and $50°$. The inset shows that the grasping time varies for different $\theta$, and the red arrow indicates the direction of increasing $\theta$. (c) The comparison of the pulling force $F_p$, where $F_{p_{e,i}}$ and $F_{p_{r,i}}$ denote the forces of the $i$-th experiment estimated by our proposed sensing module and measured by the left commercial force sensor shown in (a), respectively. (d) The comparison of the grasping force $F_g$, where $F_{g_{e,i}}$ and $F_{g_{r,i}}$ denote the forces of the $i$-th experiment estimated by our proposed sensing module and measured by the reference commercial pressure sensor, respectively.
TABLE III: Errors of $F_p$ and $F_g$ [N] estimation in the grasping procedure
Force Error 10° 30° 50°
$F_p$ [N] Mean 0.118(4.81%) 0.094(3.88%) 0.133(5.50%)
$F_p$ [N] Max 0.299(12.21%) 0.278(11.48%) 0.497(20.51%)
$F_p$ [N] RMS 0.145(5.94%) 0.112(4.62%) 0.167(6.87%)
$F_g$ [N] Mean 0.040(8.13%) 0.027(4.90%) 0.012(2.00%)
$F_g$ [N] Max 0.133(27.33%) 0.096(17.37%) 0.049(8.05%)
$F_g$ [N] RMS 0.049(10.03%) 0.036(6.50%) 0.016(2.67%)

Fig. 7(d) compares the grasping force $F_g$, where $F_{g_{e,i}}$ and $F_{g_{r,i}}$ denote the forces of the $i$-th experiment estimated by our proposed forceps and measured by the commercial pressure sensor, respectively. Because of the additional surfaces shown in the inset of Fig. 7(a), $l_3$ in (39) was replaced by $l'_3$ in these experiments. According to Fig. 7(d), when the driving force $F_d$ is equal, the grasping force $F_g$ is positively related to $\theta$. This phenomenon conforms with the force calculation in (34) and (39), as an increased $F_p$ entails a greater $F_d$ and thus results in an increased $F_g$. Table III lists the absolute mean, maximum, and RMS errors, and their percentages of MFA, between $F_{g_r}$ and $F_{g_e}$ for the three experiments. Their values are under 0.040N, 0.133N, and 0.049N, respectively. This indicates that the forceps are valid for estimating the grasping force and meet the requirements of surgical applications [29].

III-E Ex vivo Robotic Experiments

III-E1 Experimental Setup

To further verify the feasibility of the proposed system, we conducted a group of robotic experiments on ex vivo tissue. Fig. 8 shows the experimental setup, where a UR5 robotic arm was adopted as the macro-level actuator. The ex vivo chicken tissue was placed in a human body phantom to simulate the lesion, and the instrument was introduced through a simulated minimally invasive port on the phantom.

III-E2 Automatic Tissue Grasping

We carried out automatic tissue grasping procedures with one targeted grasping force $F_g^{\ast}$ and two different targeted pulling forces $F_p^{\ast,1}$ and $F_p^{\ast,2}$. The key experimental scenes and results are shown in Fig. 9, and the experiments can be roughly divided into five phases. Transitions between these phases were performed automatically depending on the grasping force $F_g$ and the pushing/pulling force $F_p$ estimated by the haptics-enabled forceps.

With Mode-I enabled, the forceps were driven to touch the tissue during the touching phase (0–7.5s) until $F_p$ reached the touch detection threshold $F_p^{\ast,t}$, followed by a short pending period (7.5–9.5s). Then, Mode-II was enabled, and the forceps performed the tissue grasping in the grasping phase (9.5–13.5s) until $F_g$ reached the targeted value $F_g^{\ast}$. Finally, after another short pending period (13.5–15.5s), the grasped tissue was pulled up and held for a while with the targeted pulling force $F_p^{\ast}$. Because the targeted pulling forces $F_p^{\ast}$ for the first and second groups of experiments were different, the time consumed for pulling varied: for the first group with $F_p^{\ast,1} = 0.4$N, the period was around 6s, while for the second group with $F_p^{\ast,2} = 0.2$N, it was around 2s.

Fig. 9(b) shows the measured driving force $F_d$ and the estimated elastic force $F_s$ of these experiments. Fig. 9(c) shows the estimated $F_g$, where $F_g^{\ast} = 0.4$N is the targeted grasping force for the grasping action. Fig. 9(d) shows the estimated $F_p$, where $F_p^{\ast,t} = -0.05$N is the threshold for touch detection, while $F_p^{\ast,1} = 0.4$N and $F_p^{\ast,2} = 0.2$N are the target pulling forces for the first and second groups of experiments, respectively. According to the experimental results, the haptics-enabled forceps can be implemented in robotic surgery for multi-modal force sensing. We also noticed that, during the grasping process, the pulling force $F_p$ varies notably, which can potentially indicate successful grasping.
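The phase transitions described above can be expressed as a simple force-triggered sequence; the sketch below mirrors the reported thresholds, while the `forceps` interface and its methods are hypothetical placeholders.

```python
# A force-triggered sketch of the automatic grasping sequence (hypothetical
# `forceps` interface providing the estimated F_p, F_g and motion commands).
F_P_TOUCH = -0.05   # touch-detection threshold [N]
F_G_TARGET = 0.4    # target grasping force [N]
F_P_TARGET = 0.4    # target pulling force [N] (0.2 N for the second group)

def automatic_grasping(forceps):
    # Touching phase (Mode-I): advance until the contact force crosses the threshold.
    while forceps.estimate_fp() > F_P_TOUCH:
        forceps.advance()
    forceps.pause()

    # Grasping phase (Mode-II): close the jaws until the target grasping force is reached.
    while forceps.estimate_fg() < F_G_TARGET:
        forceps.close_jaws()
    forceps.pause()

    # Pulling phase (Mode-I): retract until the target pulling force is reached, then hold.
    while forceps.estimate_fp() < F_P_TARGET:
        forceps.retract()
    forceps.hold()
```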

Figure 8: The setup of the robotic experiments. A 3D-printed human body phantom was used to hold the ex vivo chicken tissue that simulated the lesion. The instrument was introduced through a simulated MIS port on the phantom.
Figure 9: The results of the automatic tissue grasping experiments, which can be classified into two groups with two different target pulling forces $F_p^{\ast}$. Experiments of the first group are marked with footnotes 1, 2, 3, and 4, while those of the second are marked with footnotes 5, 6, 7, and 8. (a) Key snapshots of the experimental procedure, attached with the target tracking results. (b) The measured driving force $F_d$ and the estimated elastic force $F_s$ of these experiments; plot (b) also shows the experimental phases. Touching means the forceps touch the targeted tissue in the opened form. Pending means the speeds of the system's motors have been set to zero for waiting. Grasping means the forceps grasp the touched tissue. Pulling means the forceps pull the grasped tissue. (c) The estimated $F_g$, where $F_g^{\ast}$ = 0.4N is the targeted grasping force for the grasping action; the partial enlargement shows that each experiment reached $F_g^{\ast}$ successfully, marked with a hexagon. (d) The estimated $F_p$, where $F_p^{\ast,t}$ = -0.05N is the threshold for touch detection, while $F_p^{\ast,1}$ = 0.4N and $F_p^{\ast,2}$ = 0.2N are the target pulling forces for the first and second groups of experiments, respectively. The left partial enlargement shows the touch detection, marked with a four-pointed star for each experiment. The two right partial enlargements show the reaching of $F_p^{\ast}$ for the two groups of experiments, marked with pentagrams.

III-F Discussion

The accuracy, sampling frequency, and resolution of the sensing module can be reconfigured with different cameras and flexures, which can be selected according to the surgical task requirements. Currently, referring to [28], we linearly model the integrated haptics-enabled forceps with the calibrated $\mathbf{K}_s$ to keep the computation simple while maintaining acceptable accuracy. The accuracy can potentially be improved with a non-linear model obtained by more advanced calibration and fitting methods. The sampling frequency of the proposed sensing module is equal to the camera rate, 30Hz in the current prototype, and no filter has been applied to the collected data yet. The sensing resolution depends on the camera's resolution and the spring's stiffness; that of the prototype is 0.001N in the $x$-$y$ plane and 0.005N along the $z$-axis. The resolution in the $x$-$y$ plane is higher than along the $z$-axis because the camera is more sensitive to displacement in the $x$-$y$ plane [21] and the spring stiffness is higher along the $z$-axis. To reduce this anisotropy in resolution, a hemisphere-shaped target is potentially helpful [22]. Moreover, to ensure the feasibility of closed-loop force control, a filter to improve accuracy and smoothness should also be studied carefully.

Since the adopted sensing method relies on the spring's deformation, estimated by the camera tracking the target mounted on the opposite end of the spring, the sensing module is limited to estimating forces applied between the target and the forceps' tip. Additionally, this paper focuses on estimating the multi-modal forces when the forceps push/pull and grasp tissues, and only the displacement $\mathbf{d}_s$ is utilized for force estimation. Because the estimated transformation ${}^{M_0}_{M_t}\mathbf{T}$ also contains the target's current orientation information, the full wrench is potentially estimable [21, 22] and will be considered in future work.

Additionally, the proposed haptics-enabled forceps have minimal tool-side circuitry, including only two micro-sized LEDs and an endoscopic camera, while the power supply and signal processing circuits are integrated at the proximal end and are removable; the adopted camera can be sterilized repeatedly, e.g., by ethylene oxide and STERRAD sterilization [20]. This improves the potential of the haptics-enabled forceps to be sterilized repeatedly as well.

Although biopsy forceps are adopted as an example in this paper, the presented sensing module can be integrated into a wide range of forceps with similar structures. The sensing module could also serve as an independent sensor for other surgical applications, such as palpation. Moreover, as suggested by the robotic experiments, a successful grasp can potentially be indicated by the variation of $F_{p}$ during each grasping process, which will be investigated in future work.

IV Conclusion

This paper presented a vision-based force sensing module that is adaptive to micro-sized biopsy forceps. An algorithm was designed to calculate the force applied to the sensing module, using a registration method to track and estimate the pose of the sensing module's target. By integrating the developed sensing module into the biopsy forceps, in conjunction with a single-axis force sensor at the proximal end, a haptics-enabled forceps was further proposed. Mathematical equations were derived to realize the multi-modal force sensing of the haptics-enabled forceps, including pushing/pulling force (Mode-I) and grasping force (Mode-II). The methods for estimating the multi-modal forces were presented with experimental verification. Groups of automatic robotic ex vivo tissue grasping procedures were conducted to further verify the feasibility of the proposed sensing method and forceps. The results show that the proposed method can enable multi-modal force sensing for micro-sized forceps, and that the haptics-enabled forceps are potentially beneficial to autonomous tissue manipulation in operations such as thyroidectomy, ENT surgery, and laparoscopic surgery. The forceps will be integrated into a dual-arm robotic surgical system to further study the benefits of multi-modal force sensing for MIS.

References

  • [1] P. E. Dupont, B. J. Nelson, M. Goldfarb, B. Hannaford, A. Menciassi, M. K. O’Malley, N. Simaan, P. Valdastri, and G.-Z. Yang, “A decade retrospective of medical robotics research from 2010 to 2020,” Science Robotics, vol. 6, no. 60, p. eabi8017, 2021.
  • [2] P. E. Dupont, N. Simaan, H. Choset, and C. Rucker, “Continuum robots for medical interventions,” Proceedings of the IEEE, vol. 110, no. 7, pp. 847–870, 2022.
  • [3] L. Wu, F. Yu, T. N. Do, and J. Wang, “Camera frame misalignment in a teleoperated eye-in-hand robot: Effects and a simple correction method,” IEEE Transactions on Human-Machine Systems, vol. 53, no. 1, pp. 2–12, 2022.
  • [4] F. Feng, Y. Zhou, W. Hong, K. Li, and L. Xie, “Development and experiments of a continuum robotic system for transoral laryngeal surgery,” International Journal of Computer Assisted Radiology and Surgery, vol. 17, no. 3, pp. 497–505, 2022.
  • [5] Y. Cao, Z. Liu, H. Yu, W. Hong, and L. Xie, “Spatial shape sensing of a multisection continuum robot with integrated DTG sensor for maxillary sinus surgery,” IEEE/ASME Transactions on Mechatronics, 2022.
  • [6] R. V. Patel, S. F. Atashzar, and M. Tavakoli, “Haptic feedback and force-based teleoperation in surgical robotics,” Proceedings of the IEEE, vol. 110, no. 7, pp. 1012–1027, 2022.
  • [7] A. Talasaz, A. L. Trejos, and R. V. Patel, “The role of direct and visual force feedback in suturing using a 7-dof dual-arm teleoperated system,” IEEE Transactions on Haptics, vol. 10, no. 2, pp. 276–287, 2016.
  • [8] G.-Z. Yang, J. Cambias, K. Cleary, E. Daimler, J. Drake, P. E. Dupont, N. Hata, P. Kazanzides, S. Martel, R. V. Patel et al., “Medical robotics—regulatory, ethical, and legal considerations for increasing levels of autonomy,” Science Robotics, p. eaam8638, 2017.
  • [9] A. Razjigaev, A. K. Pandey, D. Howard, J. Roberts, and L. Wu, “End-to-end design of bespoke, dexterous snake-like surgical robots: A case study with the Raven II,” IEEE Transactions on Robotics, 2022.
  • [10] J. Wang, J. Peine, and P. E. Dupont, “Eccentric tube robots as multiarmed steerable sheaths,” IEEE Transactions on Robotics, vol. 38, no. 1, pp. 477–490, 2021.
  • [11] A. Attanasio, B. Scaglioni, E. De Momi, P. Fiorini, and P. Valdastri, “Autonomy in surgical robotics,” Annual Review of Control, Robotics, and Autonomous Systems, vol. 4, pp. 651–679, 2021.
  • [12] P. S. Zarrin, A. Escoto, R. Xu, R. V. Patel, M. D. Naish, and A. L. Trejos, “Development of a 2-dof sensorized surgical grasper for grasping and axial force measurements,” IEEE Sensors Journal, vol. 18, no. 7, pp. 2816–2826, 2018.
  • [13] W. Lai, L. Cao, J. Liu, S. C. Tjin, and S. J. Phee, “A three-axial force sensor based on fiber bragg gratings for surgical robots,” IEEE/ASME Transactions on Mechatronics, vol. 27, no. 2, pp. 777–789, 2021.
  • [14] A. Taghipour, A. N. Cheema, X. Gu, and F. Janabi-Sharifi, “Temperature independent triaxial force and torque sensor for minimally invasive interventions,” IEEE/ASME Transactions on Mechatronics, vol. 25, no. 1, pp. 449–459, 2019.
  • [15] T. Li, A. Pan, and H. Ren, “A high-resolution triaxial catheter tip force sensor with miniature flexure and suspended optical fibers,” IEEE Transactions on Industrial Electronics, vol. 67, no. 6, pp. 5101–5111, 2019.
  • [16] Q. Jiang, J. Li, and D. Masood, “Fiber-optic-based force and shape sensing in surgical robots: a review,” Sensor Review, vol. 43, no. 2, pp. 52–71, 2023.
  • [17] U. Kim, Y. B. Kim, J. So, D.-Y. Seok, and H. R. Choi, “Sensorized surgical forceps for robotic-assisted minimally invasive surgery,” IEEE Transactions on Industrial Electronics, vol. 65, no. 12, pp. 9604–9613, 2018.
  • [18] D.-Y. Seok, Y. B. Kim, U. Kim, S. Y. Lee, and H. R. Choi, “Compensation of environmental influences on sensorized-forceps for practical surgical tasks,” IEEE Robotics and Automation Letters, vol. 4, no. 2, pp. 2031–2037, 2019.
  • [19] Y. Sun, Y. Liu, and H. Liu, “Temperature compensation for a six-axis force/torque sensor based on the particle swarm optimization least square support vector machine for space manipulator,” IEEE Sensors Journal, vol. 16, no. 3, pp. 798–805, 2015.
  • [20] W. A. Rutala and D. J. Weber, Guideline for disinfection and sterilization in healthcare facilities (2008).   Disinfection and Sterilization, Centers for Disease Control and Prevention, 2008.
  • [21] R. Ouyang and R. Howe, “Low-cost fiducial-based 6-axis force-torque sensor,” in 2020 IEEE International Conference on Robotics and Automation (ICRA).   IEEE, 2020, pp. 1653–1659.
  • [22] A. J. Fernandez, H. Weng, P. B. Umbanhowar, and K. M. Lynch, “Visiflex: A low-cost compliant tactile fingertip for force, torque, and contact sensing,” IEEE Robotics and Automation Letters, vol. 6, no. 2, pp. 3009–3016, 2021.
  • [23] N. Haouchine, W. Kuang, S. Cotin, and M. Yip, “Vision-based force feedback estimation for robot-assisted surgery using instrument-constrained biomechanical three-dimensional maps,” IEEE Robotics and Automation Letters, vol. 3, no. 3, pp. 2160–2165, 2018.
  • [24] G. Fagogenis, M. Mencattelli, Z. Machaidze, B. Rosa, K. Price, F. Wu, V. Weixler, M. Saeed, J. E. Mayer, and P. E. Dupont, “Autonomous robotic intracardiac catheter navigation using haptic vision,” Science Robotics, vol. 4, no. 29, p. eaaw1977, 2019.
  • [25] Z. Chua and A. M. Okamura, “Characterization of real-time haptic feedback from multimodal neural network-based force estimates during teleoperation,” in 2022 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS).   IEEE, 2022, pp. 1471–1478.
  • [26] M. A. Pucheta and A. Cardona, “Design of bistable compliant mechanisms using precision–position and rigid-body replacement methods,” Mechanism and Machine Theory, vol. 45, no. 2, pp. 304–326, 2010.
  • [27] J. Deng, K. Rorschach, E. Baker, C. Sun, and W. Chen, “Topology optimization and fabrication of low frequency vibration energy harvesting microdevices,” Smart Materials and Structures, vol. 24, no. 2, p. 025005, 2014.
  • [28] S. Keller and A. Gordon, “Equivalent stress and strain distribution in helical compression springs subjected to bending,” The Journal of Strain Analysis for Engineering Design, vol. 46, no. 6, pp. 405–415, 2011.
  • [29] A. K. Golahmadi, D. Z. Khan, G. P. Mylonas, and H. J. Marcus, “Tool-tissue forces in surgery: A systematic review,” Annals of Medicine and Surgery, vol. 65, p. 102268, 2021.
  • [30] A. Banach, F. King, F. Masaki, H. Tsukada, and N. Hata, “Visually navigated bronchoscopy using three cycle-consistent generative adversarial network for depth estimation,” Medical Image Analysis, vol. 73, p. 102164, 2021.
  • [31] H. Singh, “Advanced image processing using OpenCV,” in Practical Machine Learning and Image Processing.   Springer, 2019, pp. 63–88.
  • [32] A. Kaspers, “Blob detection,” M.S. thesis, Image Sciences Institute, Utrecht Univ., Utrecht, Netherlands, 2011.
  • [33] A. Fetić, D. Jurić, and D. Osmanković, “The procedure of a camera calibration using Camera Calibration Toolbox for MATLAB,” in 2012 Proceedings of the 35th International Convention MIPRO.   IEEE, 2012, pp. 1752–1757.
[Uncaptioned image] Tangyou Liu received the B.S. degree in mechanical engineering from the Southwest University of Science and Technology (SWUST) in 2017, and the M.S. degree in mechanical engineering as an outstanding graduate from the Harbin Institute of Technology, Shenzhen (HITsz) in 2021, supervised by Prof. Max Q.-H. Meng. He then worked at KUKA, China, as a system development engineer and won the championship of the KUKA China R&D Innovation Challenge. He was awarded the IEEE ICRA 2023 Best Poster Award. He is pursuing his Ph.D. degree in mechatronic engineering at the University of New South Wales (UNSW), Sydney, Australia, under the supervision of Dr. Liao Wu. His current research interests include medical and surgical robots.
[Uncaptioned image] Tinghua Zhang received the B.S. and M.S. degrees in mechanical engineering from Jimei University (JMU) in 2020 and the Harbin Institute of Technology (Shenzhen) (HITSZ) in 2023, respectively. He is currently working at The Chinese University of Hong Kong (CUHK) as a research assistant. His research interests are mainly in medical robotics, including robotic OCT, optical tracking, and force sensing. He was a Best Student Paper finalist at IEEE ROBIO 2022.
[Uncaptioned image] Jay Katupitiya received his B.S. degree in Production Engineering from the University of Peradeniya, Sri Lanka, and his Ph.D. degree from the Katholieke Universiteit Leuven, Belgium. He is currently an Associate Professor at the University of New South Wales in Sydney, Australia. His main research interest is the development of sophisticated control methodologies for the precision guidance of field vehicles on rough terrain at high speeds.
[Uncaptioned image] Jiaole Wang received the B.E. degree in mechanical engineering from Beijing Information Science and Technology University, Beijing, China, in 2007, the M.E. degree from the Department of Human and Artificial Intelligent Systems, University of Fukui, Fukui, Japan, in 2010, and the Ph.D. degree from the Department of Electronic Engineering, The Chinese University of Hong Kong (CUHK), Hong Kong, in 2016. He was a Research Fellow with the Pediatric Cardiac Bioengineering Laboratory, Department of Cardiovascular Surgery, Boston Children’s Hospital and Harvard Medical School, Boston, MA, USA. He is currently an Associate Professor with the School of Mechanical Engineering and Automation, Harbin Institute of Technology, Shenzhen, China. His main research interests include medical and surgical robotics, image-guided surgery, human-robot interaction, and magnetic tracking and actuation for biomedical applications.
[Uncaptioned image] Liao Wu is a Senior Lecturer with the School of Mechanical and Manufacturing Engineering, University of New South Wales, Sydney, Australia. He received his B.S. and Ph.D. degrees in mechanical engineering from Tsinghua University, Beijing, China, in 2008 and 2013, respectively. From 2014 to 2015, he was a Research Fellow at the National University of Singapore. He then worked as a Vice-Chancellor’s Research Fellow at the Queensland University of Technology, Brisbane, Australia, from 2016 to 2018. Between 2016 and 2020, he was affiliated with the Australian Centre for Robotic Vision, an ARC Centre of Excellence. He has worked on applying Lie group theory to robotics, kinematic modeling and calibration, etc. His current research focuses on medical robotics, including flexible robots and intelligent perception for minimally invasive surgery.