Vision-Based Shape Reconstruction of Soft Continuum Arms Using a Geometric Strain Parametrization
Abstract
Interest in soft continuum arms has increased as their inherent material elasticity enables safe and adaptive interactions with the environment. However, to achieve full autonomy with these arms, accurate three-dimensional shape sensing is needed. Vision-based solutions have proven effective in estimating the shape of soft continuum arms. In this paper, a vision-based shape estimator that utilizes a geometric strain based representation of the soft continuum arm’s shape is proposed. This representation reduces the dimension of the curved shape to a finite set of strain basis functions, thereby allowing for efficient optimization of the shape that best fits the observed image. Experimental results demonstrate the effectiveness of the proposed approach, estimating the end-effector position with an accuracy better than the soft arm’s radius. Multiple basis functions are also analyzed and compared for the specific soft continuum arm in use.
I Introduction
Bioinspired soft robots [1] use stretchable skins, muscles, fluids, fibers and tendons to deform in a continuum fashion. Soft Continuum Arms (SCAs) [2, 3, 4] are long and slender soft robots inspired by octopus arms and elephant trunks, and possess a large spatial workspace and high dexterity. Their material elasticity combined with inherent damping enables safe and adaptable interaction with the surroundings. SCAs can be useful for several applications such as surgery [2], agriculture [5], and search and rescue [6].
Though useful, the curvilinear nature of the deformation precludes traditional sensing methods such as encoders. Sensing methods designed for continuum deformation, such as Fiber Bragg Grating [7] and electromagnetic sensors, are effective but either interfere with the flexibility of the SCA or are affected by environmental disturbances [8]. Alternatively, vision-based sensing [9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24] has gained prominence as it is noninvasive and easy to implement. Vision-based sensing methods seek to reconstruct the 3D shape of the SCA from images obtained from a camera placed in close proximity to the arm.

Early efforts on shape reconstruction were limited to planar deformations, with the camera placed perpendicular to the SCA’s bending plane [9, 10]. A two-dimensional shape was estimated by first extracting the visual markers on the SCA from the image and then fitting a two-dimensional curve with piecewise constant curvatures to these points. Recently, Fan et al. [25] proposed to replace the constant curvature with a linearly varying curvature fitted through image points of a SCA. The 3D shape of deformable objects was reconstructed in [19] by applying Self-Organizing Maps (SOM) to point cloud data from a depth camera. Methods relying on triangulation across multiple cameras were also proposed using shape-from-silhouette [14], modified SOM [15], and learning-based methods [12, 13]. Although these methods performed well, they require multiple cameras or depth cameras, which is not always practical, such as in endoscopic surgery [23]. To address this, several methods have been proposed to estimate the shape of a flexible endoscopic instrument by minimizing the image reprojection error [21, 22, 23, 24]. However, the model these methods used is specific to flexible tools in endoscopic surgery and does not apply to other types of continuum manipulators. The work presented in this paper generalizes these methods to any SCA, as long as it is within the field of view of the camera.
In this paper, we propose a geometric strain parameterization method to reconstruct the 3D shape of a fixed-length SCA using a wide-angle monocular camera. The shape of the soft arm is represented as a linear combination of a specified set of strain basis functions. Though a similar representation was introduced in [26] to model soft arms, this paper investigates it in the context of shape reconstruction. The geometric strain parameterization allows for reconstructing the orientations of the SCA’s cross sections along its length as well as the full 3D shape. By adopting this representation, the dimensionality of the problem is reduced, thereby easing the estimation process and making it possible to optimize for the shape that best fits the camera’s observations. We also provide a simple comparison of various basis functions based on their respective shape sensing accuracy. Although not comprehensive, this comparison provides some insight into which basis performs best for the SCA used in this study.
The proposed shape reconstruction method works as follows. Given initial curvatures along the SCA, the projection of the shape onto the image space is first obtained. Section II elaborates the forward model that gives the image projection of the SCA given its curvature. Then the error between this estimated projection and the observed projection in the camera output is measured. An optimization routine then updates the strain coefficient estimates in a direction that decreases this error. This is repeated until the desired stopping criterion is reached. More details on the approach are presented in Section III. The method is tested on a fiber-reinforced SCA known as the BR2 manipulator [4] (shown in Fig. 1) that can undergo complex bending and twisting configurations. Discussion of the results is presented in Section IV.
II Model of the Soft Continuum Arm Projection
II-A Forward Model
The shape of the SCA can be described by a position vector $p(s) \in \mathbb{R}^3$ and a rotation frame $R(s) \in SO(3)$ at each cross section along its length $s \in [0, L]$. For convenience, the position and orientation are expressed compactly by joining them into a single matrix in the special Euclidean group $SE(3)$ [27]:

$$ g(s) = \begin{bmatrix} R(s) & p(s) \\ 0 & 1 \end{bmatrix} \in SE(3). $$
The position and orientation of the center points of the SCA’s cross sections evolve, with respect to the length parameter $s$, according to

$$ p'(s) = R(s)\,\nu(s), \tag{1} $$

$$ R'(s) = R(s)\,\widehat{\kappa}(s), \tag{2} $$

where $\nu(s) \in \mathbb{R}^3$ is a vector that contains the stretching/shearing strains, $\kappa(s) \in \mathbb{R}^3$ contains the bending/twisting strains, and $\widehat{(\cdot)}$ is the usual mapping of a vector in $\mathbb{R}^3$ to a skew-symmetric matrix in $\mathfrak{so}(3)$, the real vector space of $3 \times 3$ skew-symmetric matrices.
This can also be written compactly as

$$ g'(s) = g(s)\,\widehat{\xi}(s), \qquad \widehat{\xi}(s) = \begin{bmatrix} \widehat{\kappa}(s) & \nu(s) \\ 0 & 0 \end{bmatrix} \in \mathfrak{se}(3). \tag{3} $$
It is observed that knowing $\xi(s)$ and the initial pose $g(0)$ at the base is enough to reconstruct the full shape of the robot by integrating (3).
Given $\xi(s)$ and an initial base pose $g(0)$, we would like to obtain the projection of the SCA onto the camera’s image sphere. This is achieved in two steps: integrating equation (3) to obtain the shape in 3D space, then projecting the 3D positions onto the camera’s image sphere.
Various methods can be applied to integrate (3), the simplest of which is to discretize the strains into a set of piecewise constant strains $\xi_i$, $i = 1, \ldots, N$, over segments of length $h = L/N$. After discretizing, it is possible to apply a simple first-order integrator

$$ g_{i+1} = g_i \exp\!\big(h\,\widehat{\xi}_i\big). \tag{4} $$
This approach may suffer from drift, especially if too few discretization points are used. More sophisticated approaches for integrating on Lie groups include the Crouch–Grossman and Munthe–Kaas methods [28]. Regardless of which method is used, the result of the integration is expressed as $g(s)$.
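As an illustration, the first-order integrator in (4) can be sketched as follows. This is a minimal sketch under stated assumptions: strains are piecewise constant, `scipy.linalg.expm` stands in for a closed-form $SE(3)$ exponential, and the function names are our own, not the paper's.

```python
import numpy as np
from scipy.linalg import expm

def hat(v):
    """Hat map: a 3-vector to a skew-symmetric matrix in so(3)."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def se3_hat(kappa, nu):
    """Strain twist (kappa, nu) to a 4x4 matrix in se(3), as in eq. (3)."""
    xi = np.zeros((4, 4))
    xi[:3, :3] = hat(kappa)   # bending/twisting strains
    xi[:3, 3] = nu            # stretching/shearing strains
    return xi

def integrate_shape(kappas, nus, length, g0=None):
    """First-order Lie group integrator of eq. (4): g_{i+1} = g_i exp(h xi_i^)."""
    g = np.eye(4) if g0 is None else g0.copy()
    h = length / len(kappas)  # segment length
    poses = [g.copy()]
    for kappa, nu in zip(kappas, nus):
        g = g @ expm(h * se3_hat(np.asarray(kappa, float), np.asarray(nu, float)))
        poses.append(g.copy())
    return poses
```

With zero curvature and a unit tangent $\nu = [0,0,1]^\top$, the integrator reproduces a straight arm of the given length, which serves as a quick sanity check.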
II-B Camera Model
We consider the case where the camera is fixed in proximity to the base of the SCA. To capture the entire workspace of the SCA, a fisheye camera is used. We note here that the shape reconstruction method presented in this paper does not depend on the type of camera or model used and can be applied to any calibrated camera that is placed such that the SCA is in its field of view.
Without loss of generality, an image sphere is considered rather than an image plane, since a wide-angle camera is used [29]. The projection of a point $p \in \mathbb{R}^3$ onto the image sphere centered at the origin is given by

$$ \mathbf{s} = \frac{p}{\lVert p \rVert}. \tag{5} $$

Given a point $(u, v)$ on an image $I$, its corresponding image sphere projection can be obtained through the following equations:

$$ \tilde{\mathbf{s}} = \begin{bmatrix} u \\ v \\ f(\rho) \end{bmatrix}, \tag{6} $$

$$ \mathbf{s} = \lambda\, \tilde{\mathbf{s}}, \tag{7} $$

where $f$ is a function that depends on the distance $\rho = \sqrt{u^2 + v^2}$ of the image point from the image center,

$$ f(\rho) = a_0 + a_1 \rho + a_2 \rho^2 + \cdots + a_N \rho^N, \tag{8} $$

and $\lambda$ is a normalizing scalar. The coefficients $a_i$ depend on the fisheye lens that is used and can be estimated through a calibration process [29].
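To make this mapping concrete, the following sketch back-projects a pixel onto the unit image sphere under a Scaramuzza-style polynomial model. The coefficient list `poly` and the image-center handling are illustrative assumptions; a real calibration (e.g., with the OCamCalib toolbox [29]) additionally estimates the image center and an affine distortion.

```python
import numpy as np

def backproject_to_sphere(u, v, poly, center=(0.0, 0.0)):
    """Back-project image point (u, v) onto the unit image sphere.

    poly -- polynomial coefficients [a0, a1, ..., aN] of f(rho) in eq. (8),
            obtained from calibration (illustrative values here)."""
    du, dv = u - center[0], v - center[1]
    rho = np.hypot(du, dv)                          # distance from image center
    f = sum(a * rho**i for i, a in enumerate(poly))
    ray = np.array([du, dv, f])                     # unnormalized ray, eq. (6)
    return ray / np.linalg.norm(ray)                # normalization lambda, eq. (7)
```

The image center back-projects to the optical axis, and every output lies on the unit sphere by construction.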
II-C SCA Projection onto the Camera
In this work, knowledge of the SCA’s geometry is utilized by considering the envelope of its projection onto the image (the green curves in Figure 3). The explicit equations for this envelope depend on the geometry of the SCA being used, and are therefore not presented here. However, we denote the right and left boundaries of the projected envelope as $b_r(s)$ and $b_l(s)$, respectively.
III Vision-Based Shape Sensing
This section proposes a shape sensing method that utilizes a curvature-based parametrization of the SCA. First, the parametrization is introduced (a similar parametrization was introduced in [26]), then an optimization method that utilizes this parametrization to estimate the robot’s shape is proposed. For simplicity, an inextensible/unshearable SCA is assumed, thus $\nu(s) = [0, 0, 1]^\top$.
III-A Parameterizing $\kappa$
Although $\kappa$, without further assumptions, lives in the space of continuous functions with values in $\mathbb{R}^3$, for a specific SCA we assume it lives in a finite-dimensional function space and cannot take an arbitrary shape. In other words, the possible curvature profiles that the SCA adheres to can be expressed as a linear combination of a set of basis functions

$$ \kappa(s) = \sum_{i=1}^{n} q_i\, \phi_i(s), \tag{9} $$

where $q = (q_1, \ldots, q_n)$ is the set of coefficients corresponding to the basis functions $\phi_i$. Applying this to represent functions valued in the Lie algebra of $SE(3)$ results in

$$ \xi(s) = \begin{bmatrix} \kappa(s) \\ \nu(s) \end{bmatrix} = \Phi(s)\, q + \xi^*, \tag{10} $$

$$ g'(s) = g(s)\, \widehat{\left(\Phi(s)\, q + \xi^*\right)}, \tag{11} $$

where $\Phi(s)$ stacks the basis functions and $\xi^* = [0, 0, 0, 0, 0, 1]^\top$ accounts for the fixed stretching/shearing strains, following the variable-strain parameterization of [26].
Choosing specific bases, this viewpoint recovers some of the representations used in the literature, for example:

• Constant curvature: $\phi_1(s) = 1$.

• Piecewise constant curvatures: $\phi_i(s) = 1$ on the $i$-th segment and $0$ elsewhere.

• Linear curvature: $\phi_1(s) = 1$, $\phi_2(s) = s$.
Other basis functions may also be used, such as higher order polynomials or trigonometric functions. With this formulation, it is possible to analyze the performance of various basis functions in capturing the space of shapes that a specific SCA can perform, or perhaps learn a basis that best describes the shapes of a SCA. However this is out of the scope of this study.
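A minimal sketch of evaluating (9) for the bases listed above; the basis definitions and the normalized arc length are illustrative choices, not the paper's exact implementation.

```python
def eval_curvature(s, coeffs, basis):
    """Evaluate kappa(s) = sum_i q_i * phi_i(s), eq. (9), for one strain component."""
    return sum(q * phi(s) for q, phi in zip(coeffs, basis))

L = 1.0  # normalized arm length (assumed)
constant  = [lambda s: 1.0]                           # constant curvature
linear    = [lambda s: 1.0, lambda s: s / L]          # linear curvature
piecewise = [lambda s: 1.0 if s < L / 2 else 0.0,     # two constant segments
             lambda s: 0.0 if s < L / 2 else 1.0]
```

Each list of callables plays the role of $\{\phi_i\}$, and the coefficient vector $q$ fully determines the curvature profile along the arm.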
III-B Optimization
Given an image $I$ and a basis that can accurately express the possible curvature profiles of a SCA, the full shape of the SCA can be reconstructed by finding the set of coefficients $q^*$ that minimizes a suitable cost function $C$,

$$ q^* = \arg\min_{q} C(q). \tag{12} $$

From the image $I$, we assume it is possible to extract the image coordinates of the right and left boundary edges of the SCA’s projection, $p_r^j$ and $p_l^j$, for a set of sample points $s_j$, $j = 1, \ldots, m$, on the SCA. Such a task is possible using computer vision methods such as Mask R-CNN [30]; applying such methods is out of the scope of this study.

To quantify the fit between the estimated and observed shapes, we take the error between $p_r^j$, $p_l^j$ and the boundary curves $b_r$, $b_l$ of the estimated shape for a given set of coefficients $q$,

$$ C(q) = \sum_{j=1}^{m} w_j \Big( \big\lVert b_r(s_j; q) - p_r^j \big\rVert + \big\lVert b_l(s_j; q) - p_l^j \big\rVert \Big), \tag{13} $$

where $w_j$ are weights over the sample points.
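The optimization in (12) can be sketched as follows. The forward model (Section II composed with the camera projection) is abstracted into a user-supplied `project_boundaries(q)`; the cost mirrors a weighted boundary error, and `scipy.optimize.minimize` with Nelder–Mead is a placeholder solver choice, not the paper's.

```python
import numpy as np
from scipy.optimize import minimize

def reprojection_cost(q, project_boundaries, obs_r, obs_l, weights):
    """Weighted error between predicted and observed boundary points."""
    pred_r, pred_l = project_boundaries(q)        # forward model + projection
    err = (np.linalg.norm(pred_r - obs_r, axis=1)
           + np.linalg.norm(pred_l - obs_l, axis=1))
    return float(np.sum(weights * err))

def estimate_shape(q0, project_boundaries, obs_r, obs_l, weights):
    """Solve eq. (12) by a local search starting from an initial guess q0."""
    res = minimize(reprojection_cost, np.asarray(q0, dtype=float),
                   args=(project_boundaries, obs_r, obs_l, weights),
                   method="Nelder-Mead")
    return res.x
```

With a toy linear `project_boundaries`, the routine recovers the coefficients that generated the observations, which illustrates the estimation loop without the full rod and camera models.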
IV Results and Discussions
In this section, results are provided for implementing vision-based reconstruction on our specific SCA, the BR2 manipulator [4]. The SCA consists of three parallel fiber-reinforced actuators [32] that bend and twist (clockwise and counterclockwise), respectively. A spiral deformation mode can be obtained with this robot by pressurizing the bending and twisting tubes. For these experiments, the length and diameter of the arm were fixed. Since our robot can twist along its length and only bend in one direction, the curvature has two nonzero components, associated with the twisting and bending strains, respectively.
A wide-angle fisheye camera was fixed to the base of the SCA. The camera was calibrated using the MATLAB toolbox OCamCalib [29]. As in [23, 21, 22], multiple white visual markers were fixed on the SCA, as shown in Figure 1. Although the proposed method can be implemented without the markers, they allow for a consistent evaluation of the method across the SCA’s workspace. The marker positions on the image were manually identified. While this can be automated, in this work we wish to evaluate the accuracy of shape estimation without the influence of errors coming from the computer vision system.
To evaluate the accuracy of the shape estimator, an electromagnetic tracking system (Patriot SEU, Polhemus) was used to cross-validate the pose measurements of the SCA’s end tip. It is important to note that equation (12) does not use the magnetic sensor data and only minimizes the reprojection error, i.e., the error between the projection of the estimated shape and the observed projection. The data from the magnetic sensor were only used to evaluate the accuracy of the tip pose estimate obtained from equation (12).
IV-A Experimental Procedure
A series of experiments was conducted to evaluate the accuracy of the proposed shape sensing method. In these experiments, various configurations were tested by applying different pressures to the bending and twisting actuators, each spanning its respective operating range. From the combinations of these pressures, 100 sample configurations were obtained covering half of the robot’s workspace. At each sample, an image was captured and readings from the magnetic sensor were collected. These measurements were preprocessed to find unknown parameters such as the pose of the SCA’s base and the relative transformation between the camera frame and the magnetic source frame.
IV-B Pose of SCA’s Base
The pose of the SCA’s base with respect to the camera frame, $g(0)$, is unknown. Here we present an approach that utilizes the same optimization framework and provides an accurate estimate of this transformation. The relative transformation between the base of the SCA and the camera was obtained by finding the pose that minimizes the error between the observed projection of the base, $p_{r,0}^k$ and $p_{l,0}^k$, and the projection of the estimated base over the whole set of images,

$$ g(0)^* = \arg\min_{g(0) \in SE(3)} \sum_{k} \Big( \big\lVert b_r(0; g(0)) - p_{r,0}^k \big\rVert + \big\lVert b_l(0; g(0)) - p_{l,0}^k \big\rVert \Big), \tag{14} $$

where $k$ indexes the captured images.
Once the initial pose was found, it was fixed and used to solve (12).
IV-C Pose Data from the Magnetic Sensor
The pose obtained from the magnetic sensor, $g_m$, is with reference to a magnetic source inside the lab. However, the estimate obtained from equation (12) is with respect to the camera frame. To be able to cross-validate the estimated poses with the magnetic sensor readings, we need the relative transformation between the camera frame and the magnetic source frame, $g_{cm}$, in order to represent the data with respect to the camera frame. Furthermore, the magnetic sensor does not align exactly with the SCA’s tip; therefore, we also need the relative transformation between the magnetic sensor and the SCA’s tip, $g_{st}$.

An estimate of these relative transformations was obtained by minimizing the error between the observed projection of the tip, $p_{r,L}^k$ and $p_{l,L}^k$, and the projection of the transformed sensor readings over the whole set of images,

$$ \big(g_{cm}^*,\, g_{st}^*\big) = \arg\min_{g_{cm},\, g_{st}} \sum_{k} \Big( \big\lVert b_r(L; g_{cm}\, g_m^k\, g_{st}) - p_{r,L}^k \big\rVert + \big\lVert b_l(L; g_{cm}\, g_m^k\, g_{st}) - p_{l,L}^k \big\rVert \Big). \tag{15} $$

The pose was constrained to a subset of $SE(3)$: translations along the x- and z-axes and rotation about the x-axis. We numerically verified that equation (15) always converges locally to the same values. Once the transformations were estimated, the magnetic sensor readings were transformed to the corresponding tip position with respect to the camera frame. This was then used to measure the error in the estimated tip position and direction.
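The constrained pose described above can be parameterized with three scalars. A sketch follows, where the parameter ordering (x-translation, z-translation, rotation angle about x) is our assumption:

```python
import numpy as np

def constrained_pose(tx, tz, theta):
    """Homogeneous transform restricted to x/z translation and rotation about x."""
    c, s = np.cos(theta), np.sin(theta)
    T = np.eye(4)
    T[1:3, 1:3] = np.array([[c, -s],
                            [s,  c]])   # rotation about the x-axis
    T[0, 3] = tx                        # translation along x
    T[2, 3] = tz                        # translation along z
    return T
```

Optimizing over these three parameters instead of a full six-degree-of-freedom pose keeps the search in (15) well conditioned.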
Table I: Tip estimation errors for the tested basis functions. Columns 3–6 report errors over the entire workspace; columns 7–10 report errors in region A.

| Order | Segments (Parameters) | E1 mean ± std | E1 max | E2 mean ± std (deg) | E2 max (deg) | E1 mean ± std | E1 max | E2 mean ± std (deg) | E2 max (deg) |
|---|---|---|---|---|---|---|---|---|---|
| 0 | 1 (2) | 29.3 ± 1.2 | 31.9 | 29.3 ± 6.6 | 39.8 | 29.8 ± 0.9 | 31.9 | 34.9 ± 3.0 | 39.8 |
| 0 | 2 (4) | 13.0 ± 9.0 | 28.2 | 12.2 ± 4.7 | 21.3 | 6.1 ± 5.3 | 28.2 | 9.9 ± 3.1 | 16.5 |
| 1 | 1 (4) | 14.9 ± 9.2 | 30.4 | 15.7 ± 5.3 | 33.2 | 7.7 ± 7.2 | 30.4 | 13.5 ± 6.4 | 33.2 |
| 2 | 1 (6) | 15.1 ± 9.1 | 30.4 | 15.6 ± 5.3 | 33.2 | 8.2 ± 7.7 | 30.4 | 13.6 ± 6.4 | 33.2 |
| 3 | 1 (8) | 15.1 ± 9.1 | 30.4 | 15.6 ± 5.3 | 33.2 | 8.2 ± 7.7 | 30.4 | 13.6 ± 6.4 | 33.2 |
• The soft robot has a fixed length and diameter.

• Region A is the upper half region of the workspace shown in Figure 5.

• E1: Error in the tip position.

• E2: Error in the tip direction angle.
IV-D Results and Analysis
The proposed method has been tested with five different basis functions for the curvature: constant, piecewise constant, linear, quadratic, and cubic functions. The weights $w_j$ in equation (13) were chosen to increase linearly with the length of the arm (i.e., more weight is given to the end tip). The mean, standard deviation, and maximum of the errors in the estimated tip position and direction are summarized in Table I. For the specific SCA used in the experiment, the basis with minimum error is a piecewise constant function with two segments, with errors of 13.0 ± 9.0 in position and 12.2 ± 4.7 degrees in angle. Relative to the diameter of the SCA, the estimated tip position is, on average, within approximately half the diameter. It is worth noting that, for the BR2, increasing the order of the polynomial to the third order or more does not improve the results, as seen in the last row of Table I.
From the spatial distribution of the errors (see Figure 5), it can be observed that the region with maximum error is where the SCA is close to being fully extended. This could be due to the difficulty of obtaining accurate marker coordinates when the SCA is in this configuration. This region is also more sensitive to errors in the image coordinates of the detected markers, since small changes in the marker coordinates translate into large displacements in the tip position. Another way to see this is to observe that there is a certain configuration (outside of the robot’s workspace) where all the markers project to the same point on the image, and thus shape reconstruction becomes difficult as this configuration is approached.
For some applications, the upper half of the workspace might be more important (i.e., when the SCA is bending more). When considering only this region, an improvement in the estimation accuracy is observed: the two-segment piecewise constant basis achieves errors of 6.1 ± 5.3 in position and 9.9 ± 3.1 degrees in angle.
One source of error is inaccuracy in the camera calibration, especially since fisheye lenses have higher distortion at the edges compared to narrow-angle lenses. This can be seen when the SCA’s tip is close to the edges of the camera’s view, as in Figure 3 (e); these regions have higher errors due to the lens distortion. Fine-tuning the camera calibration parameters could resolve this issue. Another source of error is the detection of the marker positions in the image. Since the number of markers is relatively low, small errors in their image coordinates lead to errors in the overall shape estimate. This can be addressed by considering the entire projected curve rather than only the markers, thus improving the estimation accuracy.
V Conclusion
Accurate 3D shape reconstruction of a soft continuum arm is essential for applications that require accurate interactions with the environment. The lack of accurate, cost-effective solutions to this problem makes it difficult to deploy autonomous SCAs in real-world scenarios [5]. In this paper, a vision-based approach for estimating the SCA’s shape was proposed. The method utilizes a fisheye camera attached to the base of the SCA that is able to see its whole workspace. A generic curvature-based representation of the SCA’s shape was used to efficiently optimize for the shape that minimizes the reprojection error. Results show the effectiveness of the proposed method, which achieved tip position errors smaller than the SCA’s diameter. This margin of error is acceptable for most applications.
Out of the tested basis functions, the best performance for the BR2 was achieved with two constant strain segments. The performance of other types of basis functions can be analyzed in the future for a more comprehensive comparison. Another interesting direction would be to learn a basis that best describes the SCA’s shapes.
The SCA used in this work bends only in one direction, therefore one camera was enough to capture its workspace. For applications where a robot that bends in both directions is needed, this approach can be extended by placing multiple cameras around the base of the SCA. One drawback of using a camera to estimate the robot’s shape is its vulnerability to full or partial occlusion of the SCA. This drawback can be addressed by extending the proposed method to accept and fuse other helpful sensor measurements, such as the internal pressures or a curvature sensor.
The work presented in this paper will be built upon to develop autonomous capabilities for SCAs that would be useful in applications such as fruit harvesting [5], robotic caregiving, and surgery. More specifically, it can be applied as a feedback loop for controlling the SCA’s end effector to a desired position. It can also be used in applications where the whole shape of the SCA is needed, such as detecting obstacles or estimating the contact forces applied on the SCA. To deploy the proposed system in real-world applications, real-time performance is needed. It is possible to optimize the proposed method for high-speed computation by using a real-time programming language and by exploiting parallelization.
References
- [1] D. Rus and M. T. Tolley, “Design, fabrication and control of soft robots,” Nature, vol. 521, no. 7553, pp. 467–475, 2015.
- [2] M. Cianchetti, T. Ranzani, G. Gerboni, I. De Falco, C. Laschi, and A. Menciassi, “STIFF-FLOP surgical manipulator: Mechanical design and experimental characterization of the single module,” in 2013 IEEE/RSJ International Conference on Intelligent Robots and Systems. IEEE, nov 2013, pp. 3576–3581.
- [3] S. Neppalli, B. Jones, W. McMahan, V. Chitrakaran, I. Walker, M. Pritts, M. Csencsits, C. Rahn, and M. Grissom, “OctArm-A soft robotic manipulator,” in Intelligent Robots and Systems, 2007. IROS 2007. IEEE/RSJ International Conference on. IEEE, 2007, p. 2569.
- [4] N. K. Uppalapati and G. Krishnan, “Design and Modeling of Soft Continuum Manipulators Using Parallel Asymmetric Combination of Fiber-Reinforced Elastomers,” Journal of Mechanisms and Robotics, vol. 13, no. 1, feb 2021.
- [5] N. Kumar Uppalapati, B. Walt, A. Havens, A. Mahdian, G. Chowdhary, and G. Krishnan, “A Berry Picking Robot With A Hybrid Soft-Rigid Arm: Design and Task Space Control,” in Robotics: Science and Systems XVI. Robotics: Science and Systems Foundation, jul 2020.
- [6] W. McMahan, V. Chitrakaran, M. Csencsits, D. Dawson, I. D. Walker, B. A. Jones, M. Pritts, D. Dienno, M. Grissom, and C. D. Rahn, “Field trials and testing of the OctArm continuum manipulator,” in Robotics and Automation, 2006. ICRA 2006. Proceedings 2006 IEEE International Conference on, may 2006, pp. 2336–2341.
- [7] R. Xu, A. Yurkewich, and R. V. Patel, “Curvature, Torsion and Force Sensing in Continuum Robots Using Helically-Wrapped FBG Sensors,” IEEE Robotics and Automation Letters, vol. 1, no. 2, pp. 1052–1059, jul 2016.
- [8] C. Shi, X. Luo, P. Qi, T. Li, S. Song, Z. Najdovski, T. Fukuda, and H. Ren, “Shape sensing techniques for continuum robots in minimally invasive surgery: A survey,” IEEE Transactions on Biomedical Engineering, vol. 64, no. 8, pp. 1665–1678, 2017.
- [9] M. Hannan and I. Walker, “Vision based shape estimation for continuum robots,” in Proceedings - IEEE International Conference on Robotics and Automation, vol. 3. IEEE, 2003, pp. 3449–3454.
- [10] M. W. Hannan and I. Walker, “Real-time shape estimation for continuum robots using vision,” Robotica, vol. 23, no. 5, pp. 645–651, 9 2005.
- [11] V. K. Chitrakaran, A. Behal, D. M. Dawson, and I. D. Walker, “Setpoint regulation of continuum robots using a fixed camera,” Robotica, vol. 25, no. 5, pp. 581–586, 2007.
- [12] A. Reiter, R. E. Goldman, A. Bajo, K. Iliopoulos, N. Simaan, and P. K. Allen, “A learning algorithm for visual pose estimation of continuum robots,” in 2011 IEEE/RSJ International Conference on Intelligent Robots and Systems. IEEE, 9 2011, pp. 2390–2396.
- [13] A. Reiter, A. Bajo, K. Iliopoulos, N. Simaan, and P. K. Allen, “Learning-based configuration estimation of a multi-segment continuum robot,” in 2012 4th IEEE RAS & EMBS International Conference on Biomedical Robotics and Biomechatronics (BioRob). IEEE, 6 2012, pp. 829–834.
- [14] D. B. Camarillo, K. E. Loewke, C. R. Carlson, and J. K. Salisbury, “Vision based 3-D shape sensing of flexible manipulators,” in 2008 IEEE International Conference on Robotics and Automation. IEEE, 5 2008, pp. 2940–2947.
- [15] J. M. Croom, D. C. Rucker, J. M. Romano, and R. J. Webster, “Visual sensing of continuum robot shape using self-organizing maps,” in Proceedings - IEEE International Conference on Robotics and Automation, 2010, pp. 4591–4596.
- [16] R. A. Manakov, D. Kolpashchikov, V. Danilov, N. Laptev, I. Skirnevskiy, and O. Gerget, “Visual shape and position sensing algorithm for a continuum robot,” 14th International Forum on Strategic Technology (IFOST-2019), pp. 399–402, 2019.
- [17] R. C. Jackson, T. Liu, and M. C. Cavusoglu, “Catadioptric stereo tracking for three dimensional shape measurement of MRI guided catheters,” Proceedings - IEEE International Conference on Robotics and Automation, vol. 2016-June, pp. 4422–4428, 2016.
- [18] T. Greigarn, R. Jackson, and M. C. Çavuşoǧlu, “State Estimation for MRI-Actuated Catheters via Catadioptric Stereo Camera,” IEEE International Conference on Intelligent Robots and Systems, pp. 1795–1800, 2018.
- [19] S. Xu, G. Li, D. Song, L. Sun, and J. Liu, “Real-time Shape Recognition of a Deformable Link by Using Self-Organizing Map,” in 2018 IEEE 14th International Conference on Automation Science and Engineering (CASE). IEEE, 8 2018, pp. 586–591.
- [20] Y. Chen, S. Zhang, L. Zeng, X. Zhu, and K. Xu, “Model-based estimation of the gravity-loaded shape and scene depth for a slim 3-actuator continuum robot with monocular visual feedback,” in Proceedings - IEEE International Conference on Robotics and Automation, vol. 2019-May, 2019, pp. 4416–4421.
- [21] R. Reilink, S. Stramigioli, and S. Misra, “Pose reconstruction of flexible instruments from endoscopic images using markers,” Proceedings - IEEE International Conference on Robotics and Automation, pp. 2938–2943, 2012.
- [22] ——, “3D position estimation of flexible instruments: Marker-less and marker-based methods,” International Journal of Computer Assisted Radiology and Surgery, vol. 8, no. 3, pp. 407–417, 2013.
- [23] P. Cabras, D. Goyard, F. Nageotte, P. Zanne, and C. Doignon, “Comparison of methods for estimating the position of actuated instruments in flexible endoscopic surgery,” IEEE International Conference on Intelligent Robots and Systems, pp. 3522–3528, 2014.
- [24] P. Cabras, F. Nageotte, P. Zanne, and C. Doignon, “An adaptive and fully automatic method for estimating the 3D position of bendable instruments using endoscopic images,” International Journal of Medical Robotics and Computer Assisted Surgery, vol. 13, no. 4, pp. 1–14, 2017.
- [25] J. Fan, E. Del Dottore, F. Visentin, and B. Mazzolai, “Image-based Approach to Reconstruct Curling in Continuum Structures,” 2020 3rd IEEE International Conference on Soft Robotics, RoboSoft 2020, pp. 544–549, 2020.
- [26] F. Renda, C. Armanini, V. Lebastard, F. Candelier, and F. Boyer, “A Geometric Variable-Strain Approach for Static Modeling of Soft Manipulators with Tendon and Fluidic Actuation,” IEEE Robotics and Automation Letters, vol. 5, no. 3, pp. 4006–4013, 2020.
- [27] K. Lynch and F. Park, Modern Robotics: Mechanics, Planning, and Control. Cambridge: Cambridge University Press, 2017.
- [28] J. Park and W. K. Chung, “Geometric integration on Euclidean group with application to articulated multibody systems,” IEEE Transactions on Robotics, vol. 21, no. 5, pp. 850–863, 2005.
- [29] D. Scaramuzza, A. Martinelli, and R. Siegwart, “A toolbox for easily calibrating omnidirectional cameras,” in IEEE International Conference on Intelligent Robots and Systems, 2006, pp. 5695–5701.
- [30] K. He, G. Gkioxari, P. Dollar, and R. Girshick, “Mask R-CNN,” Proceedings of the IEEE International Conference on Computer Vision, pp. 2980–2988, 2017.
- [31] M. A. Branch, T. F. Coleman, and Y. Li, “Subspace, interior, and conjugate gradient method for large-scale bound-constrained minimization problems,” SIAM Journal of Scientific Computing, vol. 21, no. 1, pp. 1–23, 1999.
- [32] G. Singh and G. Krishnan, “A constrained maximization formulation to analyze deformation of fiber reinforced elastomeric actuators,” Smart Materials and Structures, vol. 26, no. 6, p. 065024, jun 2017.