SIS: Seam-Informed Strategy for T-shirt Unfolding
Abstract
Seams are information-rich components of garments. The presence of different types of seams and their combinations helps to select grasping points for garment handling. In this paper, we propose a new Seam-Informed Strategy (SIS) for finding actions for handling a garment, such as grasping and unfolding a T-shirt. Candidates for a pair of grasping points for a dual-arm manipulator system are extracted using the proposed Seam Feature Extraction Method (SFEM). A pair of grasping points for the robot system is selected by the proposed Decision Matrix Iteration Method (DMIM). The decision matrix is first computed by multiple human demonstrations and updated by the robot execution results to improve the grasping and unfolding performance of the robot. Note that the proposed scheme is trained on real data without relying on simulation. Experimental results demonstrate the effectiveness and generalization ability of the proposed strategy. The project home page is available at https://github.com/lancexz/sis
Index Terms:
Perception for grasping and manipulation, bimanual manipulation, manipulation planning.
I Introduction
Garment unfolding remains an open challenge in robotics. Existing research utilizes folds [1, 2, 3], edges [4, 5], outline points, and structural regions [6, 7] as primary references for selecting grasping points, or uses a value map calculated using models trained in simulators [8, 9, 10, 11, 12]. However, none of these methods have utilized seam information. Seams are usually located along the contour of the garment when it is fully unfolded. Consequently, the desirable grasping points for unfolding the garment tend to fall near the seams.
We believe that the introduction of seam information could improve the efficiency of the garment unfolding process for the following reasons:
•
Seams can be used as a universal garment feature due to their prevalence in different types of garments.
•
Seams are more visible than other features when the garment is randomly placed, facilitating the perception process without additional reconfiguration of the garment.
•
The introduction of seam information makes it possible to select the grasping points without explicitly using the garment structure, resulting in efficient garment handling.

This paper presents Seam-Informed Strategy (SIS), a novel scheme for selecting actions that can be used for automatic garment handling, such as dual-arm robotic T-shirt unfolding. We use the seam segments and their crossings as reference information to limit the search space for selecting grasping points, as shown in Fig. 1.
To facilitate the SIS, we propose a Seam Feature Extraction Method (SFEM) and a Decision Matrix Iteration Method (DMIM). The SFEM is used to extract seam features as candidates for a pair of dual-arm grasping points. The DMIM is used to select a pair of grasping points from the extracted candidates for dual-arm unfolding motion planning. This motion involves a sequence of robot motion primitives including grasping, stretching, and flinging to flatten and align the T-shirt.
We train the networks in SFEM with real data annotated by humans. The decision matrix in DMIM is initially computed with human demonstrations and updated with real robot execution results. Using the decision matrix, the robot can also align the orientation of the unfolded T-shirt. We evaluate the efficiency of our scheme on the real robot system through extensive experiments.
In summary, our contributions include:
•
We propose SIS, a novel strategy for selecting grasping points for T-shirt unfolding using seam information. SFEM and DMIM are proposed for this strategy.
•
In the proposed SFEM, we formulate the extraction of seam lines as an oriented line object detection problem. This formulation allows the use of any object detection network to efficiently handle curved/straight seam lines.
•
We solve grasping point selection as a scoring problem over combinations of seam segment types using the proposed DMIM, a low-cost solution for unfolding a garment.
•
Experimental results demonstrate that the T-shirt unfolding performance is promising, achieving high evaluation metrics within few episode steps.
II Related Work
Unfolding and flattening is the first step that enables various garment manipulation tasks, yet the selection of grasping points remains a challenging aspect of garment unfolding. Considerable work has addressed this problem, as reviewed below.
II-A Grasping Points Selection Strategies
There are three main types of strategies for selecting grasping points in garment unfolding or other garment handling tasks: heuristic-based strategy, matching-based strategy, and value-map-based strategy.
II-A1 Heuristic-based Strategy
II-A2 Matching-based Strategy
This strategy first maps the observed garment configuration to a canonical configuration and then obtains the grasping points based on the canonical configuration [15]. The method proposed in [16] estimates the garment pose and the corresponding grasping point by searching a database constructed for each garment type.
II-A3 Value-map-based Strategy
II-B Datasets for Learning-based Methods
One of the key issues for learning-based methods is the lack of datasets for garments due to the difficulty of annotation. To address this problem, most researchers [8, 9, 10, 11, 12, 18] train their networks using simulators. SpeedFolding [17] uses real data annotated by humans together with self-supervised learning. In [4] and [6], the use of color-marked garments is proposed to automatically generate annotated depth maps. The use of transparent fluorescent paints and ultraviolet (UV) light is proposed by [19] to generate RGB images annotated by the paints observed under UV light.
II-C Unfolding Actions Strategies
In addition to the commonly used Pick&Place, the Flinging proposed by FlingBot [8] is widely used for unfolding actions. FabricFolding [12] introduces a Pick&Drag action to improve the unfolding of sleeves. Meanwhile, DextAIRity [20] proposes the use of airflow provided by a blower to indirectly unfold the garment, thereby reducing the workspace of robots. UniFolding [11] uses a Drag&Mop action to reposition the garment when the grasping points are out of the robot's reach.
III Problem Statement
This paper focuses on automating the process of unfolding a T-shirt from arbitrary configurations by a sequence of grasping, stretching, and flinging motions using a dual-arm manipulator system under the following assumptions:
•
Seams are always present on the target T-shirt.
•
The shirt is not in an inside-out configuration.
The desired outcomes include reducing the number of necessary steps of robot actions (episode steps) to unfold a T-shirt and increasing the normalized coverage of the final unfolded garment. We also consider the final orientation of the unfolded garment. This section describes the problem to be solved in this paper.

III-A Problem Formulation
Given an observation $I_t$ captured by an RGB camera with resolution $W \times H$ at episode step $t$, our goal is to develop a policy $\pi$ mapping $I_t$ to an action $a_t$, i.e. $a_t = \pi(I_t)$, to make the garment converge to a flattened configuration. The action $a_t$ can be described by the pair of grasping points for the left and right hands, $p_t^{L}$ and $p_t^{R}$, w.r.t. the image frame $\Sigma_I$.
III-B Outline of the Proposed Scheme
Existing strategies try to map the observation $I_t$ to a canonical space or a value map to select grasping points pixel-wise. This paper proposes a new Seam-Informed Strategy (SIS) to select grasping points based on seam information. To implement the mapping policy $\pi$ based on SIS, we divide the policy into two methods, $f_{\mathrm{SFEM}}$ and $f_{\mathrm{DMIM}}$, as shown in Fig. 2.
The method $f_{\mathrm{SFEM}}$, called the Seam Feature Extraction Method (SFEM), is proposed to extract seam line segments and their crossings from $I_t$. We use the extracted set of seam line segments $\mathcal{L}_t$ and the set of seam crossing segments $\mathcal{C}_t$ as candidates for grasping points. The method is formulated as follows:

$(\mathcal{L}_t, \mathcal{C}_t) = f_{\mathrm{SFEM}}(I_t). \quad (1)$
The method $f_{\mathrm{DMIM}}$, called the Decision Matrix Iteration Method (DMIM), is proposed to select a pair of grasping points from the candidates obtained above for the unfolding action of the dual-arm robot. The decision matrix $M$ takes into account both human prior knowledge and the results of real robot executions. The method is formulated as follows:

$(p_t^{L}, p_t^{R}) = f_{\mathrm{DMIM}}(\mathcal{L}_t, \mathcal{C}_t, M). \quad (2)$
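For illustration, the following Python sketch wires the two stages of (1) and (2) together. The type and function names (SeamSegment, sfem, dmim) are placeholders assumed for this sketch, not the implementation of this paper.

```python
# Illustrative decomposition of the SIS policy into (1) and (2).
from dataclasses import dataclass
from typing import Callable, List, Tuple

import numpy as np

Point = Tuple[int, int]  # pixel coordinates w.r.t. the image frame


@dataclass
class SeamSegment:
    category: str    # e.g., "solid", "dotted", "neckline", "shoulder", ...
    p1: Point        # first endpoint (or crossing-segment corner)
    p2: Point        # second endpoint
    confidence: float


def sis_policy(image: np.ndarray,
               sfem: Callable[[np.ndarray], Tuple[List[SeamSegment], List[SeamSegment]]],
               dmim: Callable[[List[SeamSegment], List[SeamSegment]], Tuple[Point, Point]],
               ) -> Tuple[Point, Point]:
    """Map an RGB observation I_t to a grasp-point pair (p_t^L, p_t^R)."""
    seam_lines, seam_crossings = sfem(image)      # (1): SFEM
    return dmim(seam_lines, seam_crossings)       # (2): DMIM
```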
IV Method
As outlined in Section III-B, this paper primarily addresses two challenges. In Section IV-A, we explain how to extract the sets of grasping point candidates $\mathcal{L}_t$ and $\mathcal{C}_t$ from an RGB image input $I_t$ using the proposed SFEM at each episode step $t$. In Section IV-B, we explain how to select a pair of grasping points $p_t^{L}$ and $p_t^{R}$ from the candidates $\mathcal{L}_t$ and $\mathcal{C}_t$ using the proposed DMIM at each episode step $t$.
IV-A Seam Feature Extraction Method (SFEM)
The proposed SFEM consists of a seam line-segment extractor (SLE) and two seam crossing-segment detectors (SCD1 and SCD2), as shown in Fig. 2, each designed based on YOLOv3 [21]. Extracting seams as segments allows them to be used as grasping point candidates.
IV-A1 Seam Line-segment Extractor (SLE)
YOLOv3 was originally designed for object detection and cannot be used directly for seam line segment extraction. For the SLE, we therefore formulate the extraction of curved/straight seams as an oriented line object detection problem. The proposed formulation allows any object detection network to extract seam line features.
To extract seams using the object detection network, we first approximate the continuous curved seam as a set of straight line segments. Each straight line segment is described by its seam segment category and its two endpoint positions w.r.t. the image frame $\Sigma_I$. In this paper, the seam segments are categorized into four categories, namely solid, dotted, inward, and neckline, as shown in Fig. 3 (a) and (b), based on the seam types of the T-shirt used in the experiments.

We then convert the straight line segment into a bounding box by using the two endpoints as the diagonal vertices of the bounding box. To represent the orientation of the straight line segment in the bounding box, we divide each seam segment category into four orientation subclasses, namely downward diagonal, upward diagonal, horizontal, and vertical, as shown in Fig. 3 (a). This quadruples the number of seam segment categories. Consequently, each oriented seam line segment is defined by its oriented seam segment category and the parameters of the bounding box $(x, y, w, h)$, where $(x, y)$ denotes the center of the bounding box and $w$ and $h$ denote its width and height.
This bounding-box-based categorization of the seam line segment is used as the labeling rule for the SLE, shown in Algorithm 1 (a sketch of this rule is given after the list below). To implement the SLE, we introduce:
•
A minimum-size threshold for the width and height of the bounding box in pixels: The computation of the loss function during network training is very sensitive to a tiny or thin object whose area is very small. This threshold is introduced in Algorithm 1 to guarantee a minimum size of the bounding box.
•
Data augmentation with recategorization: The orientation subclass is not invariant when the image is flipped or rotated during data augmentation. Recategorization is carried out using Algorithm 1 when the image is flipped or rotated.
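The following Python sketch illustrates this labeling rule under our reading of it: the endpoints define the box, the segment angle selects the orientation subclass, a minimum box size is enforced, and the diagonal subclasses are swapped when the image is flipped. The angle bands and the default minimum size are illustrative assumptions, not the exact values of Algorithm 1.

```python
# Sketch of the bounding-box-based labeling rule (cf. Algorithm 1).
import math
from typing import Tuple


def segment_to_oriented_box(p1: Tuple[float, float], p2: Tuple[float, float],
                            min_size: float = 8.0) -> Tuple[str, float, float, float, float]:
    """Return (orientation subclass, cx, cy, w, h) for a seam line segment."""
    (x1, y1), (x2, y2) = p1, p2
    cx, cy = (x1 + x2) / 2.0, (y1 + y2) / 2.0
    # Guarantee a minimum box size so the training loss is not dominated
    # by tiny or thin boxes.
    w = max(abs(x2 - x1), min_size)
    h = max(abs(y2 - y1), min_size)

    # Orientation subclass from the segment angle (image y-axis pointing down).
    angle = math.degrees(math.atan2(y2 - y1, x2 - x1)) % 180.0
    if angle < 22.5 or angle >= 157.5:
        orient = "horizontal"
    elif angle < 67.5:
        orient = "downward_diagonal"
    elif angle < 112.5:
        orient = "vertical"
    else:
        orient = "upward_diagonal"
    return orient, cx, cy, w, h


def recategorize_on_hflip(orient: str) -> str:
    """Diagonal subclasses swap under a left-right flip; the others are invariant."""
    return {"downward_diagonal": "upward_diagonal",
            "upward_diagonal": "downward_diagonal"}.get(orient, orient)
```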
The proposed SLE can extract both curved and straight seams as a set of seam line segments. The predicted seam segments are superimposed onto the original observation $I_t$ to generate the seam-map $S_t$, as shown in Fig. 3 (c).
IV-A2 Seam Crossing-segment Detectors (SCDs)
As shown in Fig. 3 (b), three types of seam crossing segments, namely shoulder, bottom hem, and neck point of the T-shirt, are detected by using both SCD1 and SCD2 to increase the recall of the predictions. The inputs of SCD1 and SCD2 are the original image observation $I_t$ and the corresponding seam-map $S_t$, respectively, as shown in Fig. 2.
Note that the outputs of SCD1 and SCD2 are merged by keeping, for each crossing type, the highest-confidence predictions up to the maximum number of that type, since the number of each type of seam crossing segment on the T-shirt is limited.
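A simple way to realize this merging step is sketched below: pool the detections from both detectors, sort by confidence, and keep at most a fixed number per crossing type. The per-type caps used here are illustrative assumptions, not values stated in the paper.

```python
# Sketch of confidence-based merging of SCD1 and SCD2 detections.
from typing import Dict, List, Tuple

Detection = Tuple[str, float, Tuple[int, int]]  # (crossing type, confidence, position)


def merge_crossing_detections(scd1: List[Detection],
                              scd2: List[Detection],
                              max_per_type: Dict[str, int] = None) -> List[Detection]:
    if max_per_type is None:
        # Assumed caps: a T-shirt has a limited number of each crossing type.
        max_per_type = {"shoulder": 2, "bottom_hem": 2, "neck_point": 2}
    merged: List[Detection] = []
    counts = {t: 0 for t in max_per_type}
    # Highest-confidence detections first, regardless of which detector produced them.
    for det in sorted(scd1 + scd2, key=lambda d: d[1], reverse=True):
        ctype = det[0]
        if counts.get(ctype, 0) < max_per_type.get(ctype, 0):
            merged.append(det)
            counts[ctype] = counts.get(ctype, 0) + 1
    return merged
```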
IV-B Decision Matrix Iteration Method (DMIM)
The idea of DMIM is proposed based on the observations of human unfolding actions using a flinging motion:
•
The Combination of Seam Segment Types (CSST) of the two selected grasping points affects the unfolding performance. For example, grasping two points both at the shoulders or both at the bottom hem results in higher unfolding quality than the other CSSTs.
•
The current unfolding action affects the unfolding performance of subsequent actions, since an appropriate action reduces the complexity of the subsequent steps. To obtain a simpler configuration for subsequent actions, we empirically select two grasping points that are far apart.
Based on these observations, we extract the human skill of unfolding by scoring the performance of each CSST through human demonstrations. The DMIM proposed in this paper represents the grasping strategy using a decision matrix. The performance score of each CSST is implemented by initializing the decision matrix with human demonstrations and updating it with robot executions.
IV-B1 Decision Matrix
As shown in Fig. 4, the proposed decision matrix $M$ is constructed as an upper triangular matrix. In this paper, six types of seam segments are considered, namely shoulder, bottom hem, neck point, solid, dotted, and neckline (the inward seam line type is treated as the dotted type in this paper for simplicity). The element $m_{ij}$ of $M$ is the performance score of the combination of the $i$-th and the $j$-th seam segment types, and is expressed as follows:

$m_{ij} = \frac{1}{N_{ij}} \sum_{n=1}^{N_{ij}} \bar{C}_n, \quad (4)$
where $N_{ij}$ is the total number of unfolding trials, performed by both human and robot, that use the combination of the $i$-th and $j$-th seam segment types. Note that $1 \le i \le j \le 6$ and $0 \le m_{ij} \le 1$. For each trial, the average normalized coverage $\bar{C}$ is expressed as follows:

$\bar{C} = \frac{1}{T} \sum_{t=1}^{T} C_t, \quad (5)$
where $T$ is the number of episode steps in a trial and $C_t$ is the normalized coverage of the T-shirt from an overhead camera observation at episode step $t$ during a trial, similar to [8, 17], and [20]. The normalized coverage is calculated by:

$C_t = \frac{A_t}{A_{\max}}, \quad (6)$
where the coverage $A_t$ is calculated by counting the number of pixels in the segmented mask output by the Segment Anything Model (SAM) [22]. The maximum coverage $A_{\max}$ is obtained from an image of a manually flattened T-shirt.
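For concreteness, the sketch below computes (4)-(6) under the reading that each matrix entry is the mean of the per-trial average normalized coverage over all trials associated with that seam-type combination. The segmentation step is abstracted behind a binary mask assumed to come from SAM.

```python
# Minimal sketch of (4)-(6) under the stated assumptions.
from typing import List

import numpy as np


def normalized_coverage(mask: np.ndarray, max_coverage_px: int) -> float:
    """(6): pixel count of the segmented mask divided by the flattened-shirt coverage."""
    return float(np.count_nonzero(mask)) / float(max_coverage_px)


def trial_average_coverage(step_coverages: List[float]) -> float:
    """(5): average normalized coverage over the T episode steps of one trial."""
    return float(np.mean(step_coverages))


def matrix_entry(trial_averages: List[float]) -> float:
    """(4): performance score of one CSST as the mean over its N_ij trials."""
    return float(np.mean(trial_averages)) if trial_averages else 0.0
```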
The decision matrix is used to select grasping points from the seam segments extracted by the SFEM. Based on the scores $m_{ij}$ in the decision matrix and the types of the extracted seam segments, the CSST with the highest score is selected. If multiple candidate pairs belong to the seam segment types of the selected CSST, the most distant candidate pair is selected as the grasping points.
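This selection rule can be sketched as follows; the seam-type ordering, matrix indexing, and data layout are our assumptions for illustration.

```python
# Sketch of grasp-pair selection from the decision matrix.
from itertools import product
from typing import Dict, List, Tuple

import numpy as np

SEAM_TYPES = ["shoulder", "bottom_hem", "neck_point", "solid", "dotted", "neckline"]


def select_grasp_pair(candidates: Dict[str, List[Tuple[float, float]]],
                      decision_matrix: np.ndarray
                      ) -> Tuple[Tuple[float, float], Tuple[float, float]]:
    """Pick the highest-scoring available CSST, then the most distant candidate pair."""
    best = None
    for i, ti in enumerate(SEAM_TYPES):
        for j in range(i, len(SEAM_TYPES)):
            tj = SEAM_TYPES[j]
            ci, cj = candidates.get(ti, []), candidates.get(tj, [])
            if not ci or not cj or (i == j and len(ci) < 2):
                continue                                   # combination not available
            if best is None or decision_matrix[i, j] > best[0]:
                best = (decision_matrix[i, j], ti, tj)     # upper-triangular entry m_ij
    if best is None:
        raise ValueError("no grasp-point candidates available")
    _, ti, tj = best
    pairs = [(a, b) for a, b in product(candidates[ti], candidates[tj]) if a != b]
    if not pairs:
        raise ValueError("selected combination has no distinct point pair")
    # Among all candidate pairs of the selected combination, take the most distant one.
    return max(pairs, key=lambda ab: np.hypot(ab[0][0] - ab[1][0], ab[0][1] - ab[1][1]))
```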
IV-B2 Initialization with Human Demonstrations and Updating with Robot Executions
The initial decision matrix $M_0$, shown in Fig. 4 (a), is computed using only the human demonstration data by (4). The same equation is then used to update the decision matrix with subsequent robot executions, resulting in $M_1$, as shown in Fig. 4 (b).
Human hands exhibit more dynamic and flexible movements that allow fine-tuning of the resulting garment configuration, a capability that the robot manipulators lack. In other words, there is a human-to-robot gap in garment manipulation. To bridge this gap, we perform real robot trials to update the decision matrix. This iterative method ensures that the grasping strategy integrates the learned human experience and the real robot explorations.

V Experiment
The garment unfolding dual-arm manipulator system consists of two Denso VS087 manipulators mounted on a fixed frame, a Basler acA4096-30uc RGB camera equipped with an 8 mm Basler lens, an Azure Kinect DK depth camera, two ATI Axia80-M8 Force Torque (FT) sensors, and two Taiyo EGS2-LS-4230 electric grippers. The robot control system is constructed based on ROS2 Humble [23] installed on a PC equipped with an RTX3090 GPU, an i9-10900KF CPU, and 64 GB of memory.
V-A Motion Primitives Used for Experiments
Below are the major motion primitives associated with the T-shirt unfolding experiments.
Exp. Name | Seam Information (SI) | Matrix Iteration (MI) | Step 1 | Step 2 | Step 3 | Step 4 | Step 5
---|---|---|---|---|---|---|---
SIS (Full) | ✓ | ✓ | 0.707, 0.554 | 0.882, 0.811 | 0.916, 0.842 | 0.903, 0.825 | 0.916, 0.828
SIS (Ab-SI) | ✗ | ✓ | 0.590, 0.493 | 0.765, 0.687 | 0.850, 0.774 | 0.863, 0.795 | 0.863, 0.756
SIS (Ab-MI) | ✓ | ✗ | 0.657, 0.574 | 0.764, 0.693 | 0.823, 0.770 | 0.822, 0.771 | 0.820, 0.764

Each step cell gives the average evaluation metric (ncov, IoU) over 20 trials.

V-A1 Grasp&Fling Motion Primitive
Similar to [8], we implement a Grasp&Fling motion primitive to speed up the process of unfolding the garment. After grasping, the T-shirt is stretched by the robot arms until the stretching force reaches a predefined threshold of 0.8 N, measured by the wrist FT sensors attached to both manipulators. The robot then generates a flinging motion that mimics the human flinging motion, based on fifth-order polynomial interpolation.
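For illustration, the following Python sketch shows fifth-order (quintic) polynomial time-scaling between two end-effector waypoints with zero boundary velocity and acceleration, a common way to generate a smooth fling-like trajectory. The waypoints and duration are placeholder values, not those used in the experiments.

```python
# Quintic time-scaling sketch for a fling-like Cartesian trajectory.
import numpy as np


def quintic_profile(t: np.ndarray, T: float) -> np.ndarray:
    """s(t) in [0, 1] with s'(0) = s'(T) = s''(0) = s''(T) = 0."""
    tau = np.clip(t / T, 0.0, 1.0)
    return 10 * tau**3 - 15 * tau**4 + 6 * tau**5


def interpolate_fling(p_start: np.ndarray, p_end: np.ndarray,
                      duration: float, n_samples: int = 100) -> np.ndarray:
    """Cartesian waypoints along a quintic time-scaled straight-line segment."""
    t = np.linspace(0.0, duration, n_samples)
    s = quintic_profile(t, duration)[:, None]
    return (1.0 - s) * p_start + s * p_end


# Example: sweep the grasped shirt forward and down over 0.6 s (assumed values).
waypoints = interpolate_fling(np.array([0.3, 0.0, 0.6]),
                              np.array([0.7, 0.0, 0.2]), duration=0.6)
```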
V-A2 Randomize Motion Primitive
To generate initial configurations of the T-shirt that are sufficiently random for a fair comparison, we randomly select a grasping point on the garment area and release the garment from a fixed position 0.78 m above the desk using one of the robot arms. During the experiments described in Sections V-C and V-D, the robot repeats this motion before each trial until the normalized coverage of the initial configuration is less than 0.4.
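A minimal sketch of this randomization loop is given below, with the robot and perception interfaces injected as callables; the helper names are hypothetical.

```python
# Sketch of the Randomize primitive: grasp a random point, drop from a fixed
# height, and repeat until the observed normalized coverage falls below the threshold.
from typing import Callable, Tuple


def randomize_until_crumpled(pick_point: Callable[[], Tuple[float, float]],
                             drop_from: Callable[[Tuple[float, float], float], None],
                             measure_ncov: Callable[[], float],
                             max_ncov: float = 0.4,
                             drop_height_m: float = 0.78) -> float:
    ncov = measure_ncov()
    while ncov >= max_ncov:
        drop_from(pick_point(), drop_height_m)
        ncov = measure_ncov()
    return ncov
```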
V-B Training/Updating of SFEM and DMIM
V-B1 Training of SFEM
The dataset used to train the SLE consists of 328 images with manually annotated labels of seam line segments. Using data augmentation by image rotation and flipping, we obtain 2104 images for training and 520 images for validation with annotated labels. We evaluated the seam line extraction performance of the trained model using a testing set in which garments were randomly placed on the desk. We calculated the Intersection over Union (IoU) between the predicted seam line segments and the manually labeled ground truth, using a line thickness of 20 pixels. The average IoU for seam line extraction is 0.734. Considering the nature of thin objects, this IoU result is reasonable as visualized in Fig. 3 (c).
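Our reading of this line-IoU metric is sketched below: rasterize the predicted and ground-truth seam segments as 20-pixel-thick lines and compute the IoU of the two binary masks. OpenCV is used here for rasterization; this is an illustrative reconstruction, not the authors' evaluation code.

```python
# Sketch of seam line IoU with a fixed line thickness.
from typing import List, Tuple

import cv2
import numpy as np

Segment = Tuple[Tuple[int, int], Tuple[int, int]]  # two endpoints in pixels


def rasterize(segments: List[Segment], shape: Tuple[int, int], thickness: int = 20) -> np.ndarray:
    mask = np.zeros(shape, dtype=np.uint8)
    for p1, p2 in segments:
        cv2.line(mask, p1, p2, color=1, thickness=thickness)
    return mask


def seam_line_iou(pred: List[Segment], gt: List[Segment],
                  shape: Tuple[int, int] = (1280, 1280), thickness: int = 20) -> float:
    a, b = rasterize(pred, shape, thickness), rasterize(gt, shape, thickness)
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return float(inter) / float(union) if union else 0.0
```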
The dataset used to train SCD1 consists of 1708 original images with manually annotated labels of seam crossing segments. The dataset used to train SCD2 consists of the same 1708 images overlaid with the seam lines extracted by the trained SLE. We evaluated the seam crossing detection performance of the merged outputs of trained SCDs using precision and recall. The average precision is 0.915 and the average recall is 0.878 during our experiments.
The original images in all datasets are captured by the Basler camera. The resolution of the images used for training is 1280 × 1280 pixels, obtained by cropping and resizing the original images.
V-B2 Initialization and Updating of DMIM
The decision matrix is initialized with human demonstrations. For each human trial, we set a fixed maximum number of episode steps, although two to three steps are usually enough to flatten the T-shirt. Once the T-shirt is flattened, we skip the remaining steps and use the same normalized coverage $C_t$ as in this step for the remaining steps.
To initialize an $m_{ij}$ for each CSST, the demonstrator is asked to randomize the T-shirt until at least one point pair of the CSST appears. The demonstrator then selects the furthest pair of such points as grasping points. In the following steps of this trial, the demonstrator selects grasping points based on intuition, and the $\bar{C}$ of the CSST is calculated. The matrix is initialized as shown in Fig. 4 (a). Each $m_{ij}$ is computed using (4) with the corresponding number of trials $N_{ij}$.
To update the decision matrix, we conduct robot trials to bridge the human-to-robot gap in unfolding actions. Each robot trial consists of five episode steps. As shown in Fig. 4 (b), $M_1$ is updated from $M_0$ with (4).

V-C Ablation Studies of the Proposed Strategy
In this section, we perform ablation studies for our proposed strategy. In the following experiments, 20 trials are performed using the two garments of Category T in Fig. 7, which were seen during training, with each trial consisting of five episode steps. The results are shown in Table I and Fig. 5. We use normalized coverage [8, 17, 20] and the IoU between the current configuration and the manually unfolded configuration [9] as evaluation metrics. Fig. 5 shows the distributions of normalized coverage with 95% confidence interval shading.
In Table I, the Full group is the complete scheme proposed in this paper based on seam information and DMIM using $M_1$.
The Ab-SI group is carried out without the seam information extracted by the SLE and uses only the crossings detected by SCD2 as candidate grasping points, while $M_1$ is used. The difference between the Ab-SI and Full groups in Fig. 5 (a)(b) shows that the seam information significantly improves the unfolding performance.
The Ab-MI group is performed to show the effectiveness of the proposed iteration method. In this experiment, $M_0$ is used without updating. Fig. 5 (c)(d) shows the human-to-robot gap in the performance of the unfolding action. The results of the Full and Ab-MI groups show that our proposed DMIM bridges the human-to-robot gap.
Configuration | Step 1 | Step 2 | Step 3 | Step 4 | Step 5
---|---|---|---|---|---
Ab-SI vs. Full, ncov | 0.026* | 0.022* | 0.053 | 0.177 | 0.106
Ab-SI vs. Full, IoU | 0.123 | 0.022* | 0.082 | 0.262 | 0.086
Ab-MI vs. Full, ncov | 0.222 | 0.013* | 0.004** | 0.016* | 0.007**
Ab-MI vs. Full, IoU | 0.642 | 0.009** | 0.018* | 0.089 | 0.055
* Statistically significant performance decrease
** Statistically highly significant performance decrease
We conduct a right-tailed Welch's t-test to analyze the differences in the means of the ncov and IoU metrics before and after removing one component from the full scheme. Each group consists of 20 samples at each episode step. The null hypothesis posits that the Full group does not outperform the ablated configuration. The alternative hypothesis posits that the Full group significantly outperforms the ablated configuration. We reject the null hypothesis in favor of the alternative hypothesis when the p-value is less than the significance level of 0.05, concluding that the performance of the Full group is significantly superior to that of the Ab-SI or Ab-MI group.
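A minimal sketch of this right-tailed Welch's t-test is given below, using SciPy's `ttest_ind` with Welch's correction and a one-sided alternative (available in SciPy 1.6 and later); the sample arrays in the usage example are placeholders, not measurements from the paper.

```python
# Right-tailed Welch's t-test: H1 is that the Full group's mean exceeds the
# ablated group's mean at a given episode step.
import numpy as np
from scipy import stats


def welch_right_tailed(full_scores: np.ndarray, ablated_scores: np.ndarray,
                       alpha: float = 0.05):
    """Return (p_value, reject_null) for H1: mean(full) > mean(ablated)."""
    result = stats.ttest_ind(full_scores, ablated_scores,
                             equal_var=False,          # Welch's correction
                             alternative="greater")    # right-tailed test
    return result.pvalue, result.pvalue < alpha


# Placeholder usage with 20 samples per group at one episode step.
rng = np.random.default_rng(0)
p, reject = welch_right_tailed(rng.normal(0.88, 0.05, 20), rng.normal(0.76, 0.08, 20))
```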

As shown in Table II, removing the Seam Information (SI) component results in statistically significant decreases in ncov during the early steps (p < 0.05). Similarly, removing the Matrix Iteration (MI) component leads to statistically significant decreases in ncov in later steps, with some decreases being highly significant (p < 0.01). The IoU results exhibit similar trends. Notably, both groups show significant performance drops during the second step, highlighting the advantages of our proposed method in addressing challenging scenarios, particularly when the garment is in a crumpled state.
Note that the following hardware-related failures are excluded from the statistical data since they are not relevant to the performance of our proposed strategy.
•
Grasping failures: We consider a grasp to have failed if either one or both of the grippers fail to grasp the selected grasping points on the T-shirt. We also consider the grasp to have failed if the T-shirt falls off the gripper(s) during the robot motion. Note that, similar to [8], we have not filtered out the data when the T-shirt is grasped in a crumpled state, which makes the flinging motion ineffective.
•
Motion failures: These occur when the grasping points exceed the workspace of the robot, or when an emergency stop is triggered due to robot singularity.
•
Releasing failures: We consider a garment release to have failed if the garment remains in the gripper after the gripper fingers open.
V-D Evaluation of Unfolding Performance
To evaluate the generalization ability of our proposed scheme, we conducted experiments with four unseen short-sleeved T-shirts in Category A in Fig. 7. In Fig. 6 (a), we plotted the average result of normalized coverage in 20 unfolding trials, each with five episode steps. Five continuous unfolding trials are performed for each of the four T-shirts. On average, the normalized coverage reaches over 0.865, 0.909, and 0.894 within three, four, and five steps, respectively.
We compare the proposed method with those presented in [8], [17], and [20], utilizing normalized coverage as the evaluation metric. As shown in Fig. 6 (a), our SIS scheme outperforms existing methods. Note that our approach is simpler in that it uses only a Grasp&Fling motion, while SpeedFolding needs additional motions such as Pick&Place and Drag in later steps. In addition, the proposed SIS aligns the T-shirt to a specific goal configuration, which SpeedFolding does not consider. We use the IoU between the resulting configuration and the goal configuration to evaluate the orientation alignment performance on Category A. Our proposed SIS reaches 0.799, 0.826, and 0.831 average IoU within three, four, and five episode steps, respectively.

In Fig. 6 (b) and (c), we also plot the normalized coverage for Categories B and C. Category B contains two short-sleeved T-shirts with a pocket attached to the front with seams. Category C contains two short-sleeved sports T-shirts whose solid seams are not in the contour area when fully unfolded. According to Welch's t-test, there is no significant difference in performance between experiments using Categories A and B. Therefore, we conclude that the seams on the pocket do not significantly affect the unfolding performance of our scheme. This is because the priority of selecting the dotted seams corresponding to the pockets is low, as shown in Fig. 4. Even for the challenging cases of Category C, the average coverage over ten trials of our scheme reaches more than 0.83 within five episode steps, demonstrating that our scheme can also be applied to such types of sports T-shirts.
Using the same models without additional training/updating, we conduct experiments on different types of garments, namely long-sleeved T-shirts and sleeveless shirts, shown as Categories D and E in Fig. 7. A total of ten unfolding trials are performed on two cloth samples for each type of garment. The results are shown in Fig. 6 (d) and (e). Although the performance suffers some degradation, our scheme can still unfold these garments without any additional retraining/updating. We conclude that our scheme can be applied to these two types of garments. Example results of unfolding trials are shown in Fig. 8.
VI Conclusion
In this paper, we propose a Seam-Informed Strategy (SIS), which uses seam information to select a pair of grasping points for unfolding a T-shirt with a dual-arm robot system. Our strategy uses SFEM to extract grasping point candidates using seam information and DMIM to select the grasping points for unfolding the T-shirt while aligning its orientation in a low-cost manner. Experimental results using various unseen T-shirts and unseen types of garments have shown that the proposed SIS effectively handles T-shirt unfolding, achieving high evaluation metrics within few episode steps.
Our future work aims to address the limitations of the current scheme when handling seamless garments, such as knitted garments, to improve the practicality and efficiency of our systems.
References
- [1] D. Triantafyllou, I. Mariolis, A. Kargakos, S. Malassiotis, and N. Aspragathos, “A geometric approach to robotic unfolding of garments,” Robot. Auton. Syst. (RAS), vol. 75, pp. 233–243, 2016.
- [2] A. Doumanoglou, J. Stria, G. Peleka, I. Mariolis, V. Petrik, A. Kargakos, L. Wagner, V. Hlaváč, T.-K. Kim, and S. Malassiotis, “Folding clothes autonomously: A complete pipeline,” IEEE Trans. Robot. (TRO), vol. 32, no. 6, pp. 1461–1478, 2016.
- [3] J. Stria, V. Petrík, and V. Hlaváč, “Model-free approach to garments unfolding based on detection of folded layers,” in Proc. IEEE/RSJ Int. Conf. Intell. Robots Syst. (IROS), 2017, pp. 3274–3280.
- [4] A. Gabas and Y. Kita, “Physical edge detection in clothing items for robotic manipulation,” in Proc. IEEE Int. Conf. Robot. Autom. (ICRA), 2017, pp. 524–529.
- [5] X. Lin, Y. Wang, Z. Huang, and D. Held, “Learning visible connectivity dynamics for cloth smoothing,” in Proc. Mach. Learn. Res. (PMLR), vol. 164, 08–11 Nov 2022, pp. 256–266.
- [6] W. Chen, D. Lee, D. Chappell, and N. Rojas, “Learning to grasp clothing structural regions for garment manipulation tasks,” in Proc. IEEE/RSJ Int. Conf. Intell. Robots Syst. (IROS), 2023.
- [7] R. Wu, H. Lu, Y. Wang, Y. Wang, and H. Dong, “Unigarmentmanip: A unified framework for category-level garment manipulation via dense visual correspondence,” in Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit. (CVPR), June 2024, pp. 16 340–16 350.
- [8] H. Ha and S. Song, “Flingbot: The unreasonable effectiveness of dynamic manipulation for cloth unfolding,” in Proc. Conf. Robot. Learn. (CoRL), 2021, pp. 24–33.
- [9] A. Canberk, C. Chi, H. Ha, B. Burchfiel, E. Cousineau, S. Feng, and S. Song, “Cloth funnels: Canonicalized-alignment for multi-purpose garment manipulation,” in Proc. IEEE Int. Conf. Robot. Autom. (ICRA), 2023, pp. 5872–5879.
- [10] N.-Q. Gu, R. He, and L. Yu, “Learning to unfold garment effectively into oriented direction,” IEEE Robot. Autom. Lett. (RAL), 2023.
- [11] H. Xue, Y. Li, W. Xu, H. Li, D. Zheng, and C. Lu, “Unifolding: Towards sample-efficient, scalable, and generalizable robotic garment folding,” Proc. Conf. Robot. Learn. (CoRL), pp. 3321–3341, 2023.
- [12] C. He, L. Meng, Z. Sun, J. Wang, and M. Q.-H. Meng, “Fabricfolding: learning efficient fabric folding without expert demonstrations,” Robotica, vol. 42, no. 4, pp. 1281–1296, 2024.
- [13] F. Osawa, H. Seki, and Y. Kamiya, “Unfolding of massive laundry and classification types by dual manipulator,” J. Adv. Comput. Intell. Intell. Inform. (JACIII), vol. 11, no. 5, pp. 457–463, 2007.
- [14] A. Doumanoglou, T.-K. Kim, X. Zhao, and S. Malassiotis, “Active random forests: An application to autonomous unfolding of clothes,” in Proc. Eur. Conf. Comput. Vis. (ECCV). Springer, 2014, pp. 644–658.
- [15] C. Chi and S. Song, “Garmentnets: Category-level pose estimation for garments via canonical space shape completion,” in Proc. IEEE/CVF Int. Conf. Comput. Vis. (ICCV), 2021, pp. 3324–3333.
- [16] Y. Li, D. Xu, Y. Yue, Y. Wang, S.-F. Chang, E. Grinspun, and P. K. Allen, “Regrasping and unfolding of garments using predictive thin shell modeling,” in IEEE Int. Conf. Robot. Autom. (ICRA), 2015, pp. 1382–1388.
- [17] Y. Avigal, L. Berscheid, T. Asfour, T. Kröger, and K. Goldberg, “Speedfolding: Learning efficient bimanual folding of garments,” in Proc. IEEE/RSJ Int. Conf. Intell. Robots Syst. (IROS), 2022, pp. 1–8.
- [18] D. Tanaka, S. Arnold, and K. Yamazaki, “Emd net: An encode–manipulate–decode network for cloth manipulation,” IEEE Robot. Autom. Lett. (RAL), vol. 3, no. 3, pp. 1771–1778, 2018.
- [19] L. Y. Chen, B. Shi, D. Seita, R. Cheng, T. Kollar, D. Held, and K. Goldberg, “Autobag: Learning to open plastic bags and insert objects,” in Proc. IEEE Int. Conf. Robot. Autom. (ICRA), 2023, pp. 3918–3925.
- [20] Z. Xu, C. Chi, B. Burchfiel, E. Cousineau, S. Feng, and S. Song, “Dextairity: Deformable manipulation can be a breeze,” in Proc. Robot. Sci. Syst. (RSS), 2022.
- [21] J. Redmon and A. Farhadi, “Yolov3: An incremental improvement,” arXiv:1804.02767, Apr. 2018.
- [22] A. Kirillov, E. Mintun, N. Ravi, H. Mao, C. Rolland, L. Gustafson, T. Xiao, S. Whitehead, A. C. Berg, W.-Y. Lo, P. Dollar, and R. Girshick, “Segment anything,” in Proc. IEEE/CVF Int. Conf. Comput. Vis (ICCV), October 2023, pp. 4015–4026.
- [23] S. Macenski, T. Foote, B. Gerkey, C. Lalancette, and W. Woodall, “Robot operating system 2: Design, architecture, and uses in the wild,” Sci. Robot., vol. 7, no. 66, p. eabm6074, 2022.